Test Report: Docker_Linux_crio 21969

ab0a8cfdd326918695f502976b3bdb249954a688:2025-11-23:42465

Failed tests (37/328)

Order  Failed test  Duration (s)
29 TestAddons/serial/Volcano 0.26
35 TestAddons/parallel/Registry 15.47
36 TestAddons/parallel/RegistryCreds 0.41
37 TestAddons/parallel/Ingress 147.92
38 TestAddons/parallel/InspektorGadget 6.26
39 TestAddons/parallel/MetricsServer 6.31
41 TestAddons/parallel/CSI 49.06
42 TestAddons/parallel/Headlamp 2.66
43 TestAddons/parallel/CloudSpanner 5.25
44 TestAddons/parallel/LocalPath 10.16
45 TestAddons/parallel/NvidiaDevicePlugin 5.27
46 TestAddons/parallel/Yakd 6.31
47 TestAddons/parallel/AmdGpuDevicePlugin 5.26
97 TestFunctional/parallel/ServiceCmdConnect 603.06
133 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.03
134 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.89
135 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.73
136 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.3
138 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.25
139 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.48
146 TestFunctional/parallel/ServiceCmd/DeployApp 600.63
152 TestFunctional/parallel/ServiceCmd/HTTPS 0.55
153 TestFunctional/parallel/ServiceCmd/Format 0.55
154 TestFunctional/parallel/ServiceCmd/URL 0.54
191 TestJSONOutput/pause/Command 2.38
197 TestJSONOutput/unpause/Command 1.45
286 TestPause/serial/Pause 5.97
348 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 2.31
351 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 2.32
359 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 2.28
361 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 2.23
370 TestStartStop/group/old-k8s-version/serial/Pause 7.06
376 TestStartStop/group/no-preload/serial/Pause 5.85
379 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 2.03
385 TestStartStop/group/embed-certs/serial/Pause 5.86
391 TestStartStop/group/newest-cni/serial/Pause 5.92
393 TestStartStop/group/default-k8s-diff-port/serial/Pause 6.27
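
Note: every `addons disable` failure traced below (Volcano, Registry, RegistryCreds) aborts identically with exit status 11 and MK_ADDON_DISABLE_PAUSED. Before disabling an addon, minikube checks whether the cluster is paused: it lists kube-system containers via crictl (which succeeds) and then runs `sudo runc list -f json` on the node, which exits 1 with "open /run/runc: no such file or directory". The missing state directory suggests crio on this image is not keeping container state under runc's default root. A minimal diagnostic sketch, assuming the addons-450053 profile from this run is still up (the grep pattern is an assumption; crio's config layout varies):

	# Step 1 of the paused check succeeds in the traces below:
	out/minikube-linux-amd64 -p addons-450053 ssh "sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	# Step 2 is what fails: runc reads its state root (/run/runc by default), which does not exist here:
	out/minikube-linux-amd64 -p addons-450053 ssh "sudo runc list -f json"
	# Hypothetical follow-up: check which OCI runtime and state root crio is actually configured with:
	out/minikube-linux-amd64 -p addons-450053 ssh "sudo grep -rE 'default_runtime|runtime_root' /etc/crio/"
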
TestAddons/serial/Volcano (0.26s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:850: skipping: crio not supported
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-450053 addons disable volcano --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-450053 addons disable volcano --alsologtostderr -v=1: exit status 11 (256.936583ms)

-- stdout --

-- /stdout --
** stderr ** 
	I1123 08:21:53.996861  116455 out.go:360] Setting OutFile to fd 1 ...
	I1123 08:21:53.997354  116455 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:21:53.997366  116455 out.go:374] Setting ErrFile to fd 2...
	I1123 08:21:53.997370  116455 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:21:53.997551  116455 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21969-103686/.minikube/bin
	I1123 08:21:53.997817  116455 mustload.go:66] Loading cluster: addons-450053
	I1123 08:21:53.998148  116455 config.go:182] Loaded profile config "addons-450053": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 08:21:53.998163  116455 addons.go:622] checking whether the cluster is paused
	I1123 08:21:53.998241  116455 config.go:182] Loaded profile config "addons-450053": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 08:21:53.998257  116455 host.go:66] Checking if "addons-450053" exists ...
	I1123 08:21:53.998612  116455 cli_runner.go:164] Run: docker container inspect addons-450053 --format={{.State.Status}}
	I1123 08:21:54.016469  116455 ssh_runner.go:195] Run: systemctl --version
	I1123 08:21:54.016515  116455 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-450053
	I1123 08:21:54.034673  116455 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21969-103686/.minikube/machines/addons-450053/id_rsa Username:docker}
	I1123 08:21:54.135722  116455 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1123 08:21:54.135824  116455 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1123 08:21:54.166019  116455 cri.go:89] found id: "738d8d379f2513ebbed6c9882209756963a949bde3ed19ade5de8580001c43b6"
	I1123 08:21:54.166048  116455 cri.go:89] found id: "d984a1356e5ecf35be65e8fc6e7992bb042d8927a704c9b1e8331c05254332d5"
	I1123 08:21:54.166055  116455 cri.go:89] found id: "f1bd36bf8d3aa419e06a2d8728e06eef3a4eb3bac9a5f4c3b24fff0f491bdd61"
	I1123 08:21:54.166060  116455 cri.go:89] found id: "e39671b6291757e254f89dc6033c7d24376b7c7120673820ff9f2cd071649ede"
	I1123 08:21:54.166065  116455 cri.go:89] found id: "524005afa9256011512767926b02159bfbb545a2d097df64aeda6918b32cfbaa"
	I1123 08:21:54.166070  116455 cri.go:89] found id: "9989944eaa26fdbd8c011baeec7cf3efbfbbe246f5276b6ceecbd64d61294399"
	I1123 08:21:54.166073  116455 cri.go:89] found id: "e3688d5b85c227523b5a3ce94991d4ee820fdc1ae296225f370587505ff591b6"
	I1123 08:21:54.166076  116455 cri.go:89] found id: "8ecc013e239af1858173ffe38500069f30090d7c4a8d2e55e0cf7931a593fbbe"
	I1123 08:21:54.166079  116455 cri.go:89] found id: "1dfc56fc8d94b1225a098a523c9650f6663217b21237541dc906578e3effc03d"
	I1123 08:21:54.166084  116455 cri.go:89] found id: "227f1cba9bc38078f86a2ee004edc57f34ac09f7aae18e70a35257d97524a389"
	I1123 08:21:54.166088  116455 cri.go:89] found id: "878966c2c1dd7601f149f13eb451daa7034eebd08cef35eebb83a577b882ce48"
	I1123 08:21:54.166091  116455 cri.go:89] found id: "f9cd2adc0709d244a2c7bc3357291110cd3b690d9689c58d1d015c5371f7f2ca"
	I1123 08:21:54.166094  116455 cri.go:89] found id: "a6ff371d12340c0a9617d886be8620819d349d024e915a5c18777920e9522800"
	I1123 08:21:54.166098  116455 cri.go:89] found id: "8364e195c165b56eaa9cee7e25199a566d7f232fea45a9c0da829ce74e7a169e"
	I1123 08:21:54.166101  116455 cri.go:89] found id: "bca140d99c87f34e3a5c81b3e3f53364fd36a08c860a55709db43ad1f00c7bd8"
	I1123 08:21:54.166112  116455 cri.go:89] found id: "0e62c249e71fecd3ff09a415c2a850ba5eb56735172347f36a18693f8631498e"
	I1123 08:21:54.166118  116455 cri.go:89] found id: "4ea39bfdb1b8efcabd00b3a8f0c5de2e2517bbf238a941a88f165c0fd97b9473"
	I1123 08:21:54.166123  116455 cri.go:89] found id: "fc8e0ddc56a4bb558f8c5f776af3a634abca3c1b4e966b269f9b6b6d44daab9e"
	I1123 08:21:54.166126  116455 cri.go:89] found id: "f5b7d2b9fc7fd00d96afc93889d6409f70924a5f3c846c8b45a49812283c03d7"
	I1123 08:21:54.166129  116455 cri.go:89] found id: "204df826a5f7fe5bc267f72965e69d0d4468a7d03c4f27b3c4dbc85558bcb790"
	I1123 08:21:54.166132  116455 cri.go:89] found id: "2b03a5a989737025a7389ea9ffc8f4440c697509dd7c3423436b93deabfc0635"
	I1123 08:21:54.166135  116455 cri.go:89] found id: "5ce09f86a113c208d0fb8a89714d1855c858b529d081cf18c7e93d6084542e29"
	I1123 08:21:54.166138  116455 cri.go:89] found id: "3d0e901e59417cd60672ef724e6e4202b2c13b0499e5b198455979ade2929d18"
	I1123 08:21:54.166140  116455 cri.go:89] found id: "58c0dd74075cff851380d2c9c1fbd280646b92ecf56beb6cd7c81444769b9354"
	I1123 08:21:54.166143  116455 cri.go:89] found id: ""
	I1123 08:21:54.166181  116455 ssh_runner.go:195] Run: sudo runc list -f json
	I1123 08:21:54.181071  116455 out.go:203] 
	W1123 08:21:54.182245  116455 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T08:21:54Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T08:21:54Z" level=error msg="open /run/runc: no such file or directory"
	
	W1123 08:21:54.182265  116455 out.go:285] * 
	* 
	W1123 08:21:54.185689  116455 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9bd16c244da2144137a37071fb77e06a574610a0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9bd16c244da2144137a37071fb77e06a574610a0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1123 08:21:54.187098  116455 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable volcano addon: args "out/minikube-linux-amd64 -p addons-450053 addons disable volcano --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/serial/Volcano (0.26s)

TestAddons/parallel/Registry (15.47s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry
=== CONT  TestAddons/parallel/Registry
addons_test.go:382: registry stabilized in 3.265423ms
addons_test.go:384: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-6b586f9694-48d75" [cc2a224a-be19-4f84-8699-fcb2e9fc4c59] Running
addons_test.go:384: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.002559011s
addons_test.go:387: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-proxy-l5z45" [0dc46992-7951-4eae-8ad8-1e175ba138cb] Running
addons_test.go:387: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.002865581s
addons_test.go:392: (dbg) Run:  kubectl --context addons-450053 delete po -l run=registry-test --now
addons_test.go:397: (dbg) Run:  kubectl --context addons-450053 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:397: (dbg) Done: kubectl --context addons-450053 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (5.000862838s)
addons_test.go:411: (dbg) Run:  out/minikube-linux-amd64 -p addons-450053 ip
2025/11/23 08:22:20 [DEBUG] GET http://192.168.49.2:5000
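(Up to this point the registry addon itself looks healthy: both pods report Running, the in-cluster wget probe completes, and the direct GET against 192.168.49.2:5000 is issued without a logged error. The failure recorded below is only the shared addons-disable paused-check error described at the top of this report.)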
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-450053 addons disable registry --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-450053 addons disable registry --alsologtostderr -v=1: exit status 11 (247.472981ms)

-- stdout --

-- /stdout --
** stderr ** 
	I1123 08:22:20.284562  119360 out.go:360] Setting OutFile to fd 1 ...
	I1123 08:22:20.284791  119360 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:22:20.284799  119360 out.go:374] Setting ErrFile to fd 2...
	I1123 08:22:20.284803  119360 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:22:20.285012  119360 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21969-103686/.minikube/bin
	I1123 08:22:20.285275  119360 mustload.go:66] Loading cluster: addons-450053
	I1123 08:22:20.285572  119360 config.go:182] Loaded profile config "addons-450053": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 08:22:20.285587  119360 addons.go:622] checking whether the cluster is paused
	I1123 08:22:20.285664  119360 config.go:182] Loaded profile config "addons-450053": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 08:22:20.285681  119360 host.go:66] Checking if "addons-450053" exists ...
	I1123 08:22:20.286061  119360 cli_runner.go:164] Run: docker container inspect addons-450053 --format={{.State.Status}}
	I1123 08:22:20.303722  119360 ssh_runner.go:195] Run: systemctl --version
	I1123 08:22:20.303785  119360 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-450053
	I1123 08:22:20.321007  119360 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21969-103686/.minikube/machines/addons-450053/id_rsa Username:docker}
	I1123 08:22:20.421526  119360 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1123 08:22:20.421601  119360 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1123 08:22:20.452033  119360 cri.go:89] found id: "738d8d379f2513ebbed6c9882209756963a949bde3ed19ade5de8580001c43b6"
	I1123 08:22:20.452054  119360 cri.go:89] found id: "d984a1356e5ecf35be65e8fc6e7992bb042d8927a704c9b1e8331c05254332d5"
	I1123 08:22:20.452058  119360 cri.go:89] found id: "f1bd36bf8d3aa419e06a2d8728e06eef3a4eb3bac9a5f4c3b24fff0f491bdd61"
	I1123 08:22:20.452062  119360 cri.go:89] found id: "e39671b6291757e254f89dc6033c7d24376b7c7120673820ff9f2cd071649ede"
	I1123 08:22:20.452073  119360 cri.go:89] found id: "524005afa9256011512767926b02159bfbb545a2d097df64aeda6918b32cfbaa"
	I1123 08:22:20.452077  119360 cri.go:89] found id: "9989944eaa26fdbd8c011baeec7cf3efbfbbe246f5276b6ceecbd64d61294399"
	I1123 08:22:20.452080  119360 cri.go:89] found id: "e3688d5b85c227523b5a3ce94991d4ee820fdc1ae296225f370587505ff591b6"
	I1123 08:22:20.452083  119360 cri.go:89] found id: "8ecc013e239af1858173ffe38500069f30090d7c4a8d2e55e0cf7931a593fbbe"
	I1123 08:22:20.452085  119360 cri.go:89] found id: "1dfc56fc8d94b1225a098a523c9650f6663217b21237541dc906578e3effc03d"
	I1123 08:22:20.452091  119360 cri.go:89] found id: "227f1cba9bc38078f86a2ee004edc57f34ac09f7aae18e70a35257d97524a389"
	I1123 08:22:20.452094  119360 cri.go:89] found id: "878966c2c1dd7601f149f13eb451daa7034eebd08cef35eebb83a577b882ce48"
	I1123 08:22:20.452097  119360 cri.go:89] found id: "f9cd2adc0709d244a2c7bc3357291110cd3b690d9689c58d1d015c5371f7f2ca"
	I1123 08:22:20.452100  119360 cri.go:89] found id: "a6ff371d12340c0a9617d886be8620819d349d024e915a5c18777920e9522800"
	I1123 08:22:20.452103  119360 cri.go:89] found id: "8364e195c165b56eaa9cee7e25199a566d7f232fea45a9c0da829ce74e7a169e"
	I1123 08:22:20.452106  119360 cri.go:89] found id: "bca140d99c87f34e3a5c81b3e3f53364fd36a08c860a55709db43ad1f00c7bd8"
	I1123 08:22:20.452114  119360 cri.go:89] found id: "0e62c249e71fecd3ff09a415c2a850ba5eb56735172347f36a18693f8631498e"
	I1123 08:22:20.452120  119360 cri.go:89] found id: "4ea39bfdb1b8efcabd00b3a8f0c5de2e2517bbf238a941a88f165c0fd97b9473"
	I1123 08:22:20.452123  119360 cri.go:89] found id: "fc8e0ddc56a4bb558f8c5f776af3a634abca3c1b4e966b269f9b6b6d44daab9e"
	I1123 08:22:20.452126  119360 cri.go:89] found id: "f5b7d2b9fc7fd00d96afc93889d6409f70924a5f3c846c8b45a49812283c03d7"
	I1123 08:22:20.452129  119360 cri.go:89] found id: "204df826a5f7fe5bc267f72965e69d0d4468a7d03c4f27b3c4dbc85558bcb790"
	I1123 08:22:20.452132  119360 cri.go:89] found id: "2b03a5a989737025a7389ea9ffc8f4440c697509dd7c3423436b93deabfc0635"
	I1123 08:22:20.452134  119360 cri.go:89] found id: "5ce09f86a113c208d0fb8a89714d1855c858b529d081cf18c7e93d6084542e29"
	I1123 08:22:20.452137  119360 cri.go:89] found id: "3d0e901e59417cd60672ef724e6e4202b2c13b0499e5b198455979ade2929d18"
	I1123 08:22:20.452140  119360 cri.go:89] found id: "58c0dd74075cff851380d2c9c1fbd280646b92ecf56beb6cd7c81444769b9354"
	I1123 08:22:20.452142  119360 cri.go:89] found id: ""
	I1123 08:22:20.452180  119360 ssh_runner.go:195] Run: sudo runc list -f json
	I1123 08:22:20.466170  119360 out.go:203] 
	W1123 08:22:20.467464  119360 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T08:22:20Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T08:22:20Z" level=error msg="open /run/runc: no such file or directory"
	
	W1123 08:22:20.467481  119360 out.go:285] * 
	* 
	W1123 08:22:20.470561  119360 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_94fa7435cdb0fda2540861b9b71556c8cae5c5f1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_94fa7435cdb0fda2540861b9b71556c8cae5c5f1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1123 08:22:20.471988  119360 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable registry addon: args "out/minikube-linux-amd64 -p addons-450053 addons disable registry --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Registry (15.47s)

TestAddons/parallel/RegistryCreds (0.41s)

=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds
=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:323: registry-creds stabilized in 3.307679ms
addons_test.go:325: (dbg) Run:  out/minikube-linux-amd64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-450053
addons_test.go:332: (dbg) Run:  kubectl --context addons-450053 -n kube-system get secret -o yaml
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-450053 addons disable registry-creds --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-450053 addons disable registry-creds --alsologtostderr -v=1: exit status 11 (252.230267ms)

-- stdout --

-- /stdout --
** stderr ** 
	I1123 08:22:23.062326  119544 out.go:360] Setting OutFile to fd 1 ...
	I1123 08:22:23.062470  119544 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:22:23.062481  119544 out.go:374] Setting ErrFile to fd 2...
	I1123 08:22:23.062488  119544 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:22:23.062699  119544 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21969-103686/.minikube/bin
	I1123 08:22:23.062997  119544 mustload.go:66] Loading cluster: addons-450053
	I1123 08:22:23.063351  119544 config.go:182] Loaded profile config "addons-450053": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 08:22:23.063374  119544 addons.go:622] checking whether the cluster is paused
	I1123 08:22:23.063473  119544 config.go:182] Loaded profile config "addons-450053": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 08:22:23.063494  119544 host.go:66] Checking if "addons-450053" exists ...
	I1123 08:22:23.063898  119544 cli_runner.go:164] Run: docker container inspect addons-450053 --format={{.State.Status}}
	I1123 08:22:23.082823  119544 ssh_runner.go:195] Run: systemctl --version
	I1123 08:22:23.082893  119544 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-450053
	I1123 08:22:23.102988  119544 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21969-103686/.minikube/machines/addons-450053/id_rsa Username:docker}
	I1123 08:22:23.203572  119544 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1123 08:22:23.203664  119544 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1123 08:22:23.232854  119544 cri.go:89] found id: "738d8d379f2513ebbed6c9882209756963a949bde3ed19ade5de8580001c43b6"
	I1123 08:22:23.232877  119544 cri.go:89] found id: "d984a1356e5ecf35be65e8fc6e7992bb042d8927a704c9b1e8331c05254332d5"
	I1123 08:22:23.232882  119544 cri.go:89] found id: "f1bd36bf8d3aa419e06a2d8728e06eef3a4eb3bac9a5f4c3b24fff0f491bdd61"
	I1123 08:22:23.232887  119544 cri.go:89] found id: "e39671b6291757e254f89dc6033c7d24376b7c7120673820ff9f2cd071649ede"
	I1123 08:22:23.232890  119544 cri.go:89] found id: "524005afa9256011512767926b02159bfbb545a2d097df64aeda6918b32cfbaa"
	I1123 08:22:23.232894  119544 cri.go:89] found id: "9989944eaa26fdbd8c011baeec7cf3efbfbbe246f5276b6ceecbd64d61294399"
	I1123 08:22:23.232897  119544 cri.go:89] found id: "e3688d5b85c227523b5a3ce94991d4ee820fdc1ae296225f370587505ff591b6"
	I1123 08:22:23.232900  119544 cri.go:89] found id: "8ecc013e239af1858173ffe38500069f30090d7c4a8d2e55e0cf7931a593fbbe"
	I1123 08:22:23.232903  119544 cri.go:89] found id: "1dfc56fc8d94b1225a098a523c9650f6663217b21237541dc906578e3effc03d"
	I1123 08:22:23.232913  119544 cri.go:89] found id: "227f1cba9bc38078f86a2ee004edc57f34ac09f7aae18e70a35257d97524a389"
	I1123 08:22:23.232917  119544 cri.go:89] found id: "878966c2c1dd7601f149f13eb451daa7034eebd08cef35eebb83a577b882ce48"
	I1123 08:22:23.232919  119544 cri.go:89] found id: "f9cd2adc0709d244a2c7bc3357291110cd3b690d9689c58d1d015c5371f7f2ca"
	I1123 08:22:23.232922  119544 cri.go:89] found id: "a6ff371d12340c0a9617d886be8620819d349d024e915a5c18777920e9522800"
	I1123 08:22:23.232925  119544 cri.go:89] found id: "8364e195c165b56eaa9cee7e25199a566d7f232fea45a9c0da829ce74e7a169e"
	I1123 08:22:23.232928  119544 cri.go:89] found id: "bca140d99c87f34e3a5c81b3e3f53364fd36a08c860a55709db43ad1f00c7bd8"
	I1123 08:22:23.232932  119544 cri.go:89] found id: "0e62c249e71fecd3ff09a415c2a850ba5eb56735172347f36a18693f8631498e"
	I1123 08:22:23.232938  119544 cri.go:89] found id: "4ea39bfdb1b8efcabd00b3a8f0c5de2e2517bbf238a941a88f165c0fd97b9473"
	I1123 08:22:23.232942  119544 cri.go:89] found id: "fc8e0ddc56a4bb558f8c5f776af3a634abca3c1b4e966b269f9b6b6d44daab9e"
	I1123 08:22:23.232945  119544 cri.go:89] found id: "f5b7d2b9fc7fd00d96afc93889d6409f70924a5f3c846c8b45a49812283c03d7"
	I1123 08:22:23.232948  119544 cri.go:89] found id: "204df826a5f7fe5bc267f72965e69d0d4468a7d03c4f27b3c4dbc85558bcb790"
	I1123 08:22:23.232953  119544 cri.go:89] found id: "2b03a5a989737025a7389ea9ffc8f4440c697509dd7c3423436b93deabfc0635"
	I1123 08:22:23.232955  119544 cri.go:89] found id: "5ce09f86a113c208d0fb8a89714d1855c858b529d081cf18c7e93d6084542e29"
	I1123 08:22:23.232958  119544 cri.go:89] found id: "3d0e901e59417cd60672ef724e6e4202b2c13b0499e5b198455979ade2929d18"
	I1123 08:22:23.232961  119544 cri.go:89] found id: "58c0dd74075cff851380d2c9c1fbd280646b92ecf56beb6cd7c81444769b9354"
	I1123 08:22:23.232979  119544 cri.go:89] found id: ""
	I1123 08:22:23.233029  119544 ssh_runner.go:195] Run: sudo runc list -f json
	I1123 08:22:23.247010  119544 out.go:203] 
	W1123 08:22:23.248473  119544 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T08:22:23Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T08:22:23Z" level=error msg="open /run/runc: no such file or directory"
	
	W1123 08:22:23.248495  119544 out.go:285] * 
	* 
	W1123 08:22:23.251563  119544 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_ac42ae7bb4bac5cd909a08f6506d602b3d2ccf6c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_ac42ae7bb4bac5cd909a08f6506d602b3d2ccf6c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1123 08:22:23.252721  119544 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable registry-creds addon: args "out/minikube-linux-amd64 -p addons-450053 addons disable registry-creds --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/RegistryCreds (0.41s)

TestAddons/parallel/Ingress (147.92s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress
=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-450053 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-450053 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-450053 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:352: "nginx" [3637a4b6-de3e-41ce-892a-ee55c2d7aa85] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx" [3637a4b6-de3e-41ce-892a-ee55c2d7aa85] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 11.003353067s
I1123 08:22:26.601297  107234 kapi.go:150] Service nginx in namespace default found.
addons_test.go:264: (dbg) Run:  out/minikube-linux-amd64 -p addons-450053 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:264: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-450053 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m13.39058165s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:280: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:288: (dbg) Run:  kubectl --context addons-450053 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-amd64 -p addons-450053 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.49.2
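
The curl probe above is the actual failure: ssh propagates the remote command's exit status, and 28 is curl's operation-timed-out code, so the request to the ingress controller most likely never completed a TCP connection (the ~2m13s elapsed is consistent with a kernel connect timeout rather than a refused port). A hedged re-run with verbose output and an explicit deadline, plus a look at the controller pods (profile and namespace names are taken from this run):

	out/minikube-linux-amd64 -p addons-450053 ssh "curl -v --max-time 30 http://127.0.0.1/ -H 'Host: nginx.example.com'"
	kubectl --context addons-450053 -n ingress-nginx get pods -o wide
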
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/Ingress]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect addons-450053
helpers_test.go:243: (dbg) docker inspect addons-450053:

-- stdout --
	[
	    {
	        "Id": "439b1684c8e4e369ea75cdf25ddaf3fcff26600aaf3dce9c93db3462f4b8736b",
	        "Created": "2025-11-23T08:20:28.295158521Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 109264,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-23T08:20:28.326081012Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:133ca4ac39008d0056ad45d8cb70521d6b70d6e1b8bbff4678fd4b354efbdf70",
	        "ResolvConfPath": "/var/lib/docker/containers/439b1684c8e4e369ea75cdf25ddaf3fcff26600aaf3dce9c93db3462f4b8736b/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/439b1684c8e4e369ea75cdf25ddaf3fcff26600aaf3dce9c93db3462f4b8736b/hostname",
	        "HostsPath": "/var/lib/docker/containers/439b1684c8e4e369ea75cdf25ddaf3fcff26600aaf3dce9c93db3462f4b8736b/hosts",
	        "LogPath": "/var/lib/docker/containers/439b1684c8e4e369ea75cdf25ddaf3fcff26600aaf3dce9c93db3462f4b8736b/439b1684c8e4e369ea75cdf25ddaf3fcff26600aaf3dce9c93db3462f4b8736b-json.log",
	        "Name": "/addons-450053",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "addons-450053:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "addons-450053",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "439b1684c8e4e369ea75cdf25ddaf3fcff26600aaf3dce9c93db3462f4b8736b",
	                "LowerDir": "/var/lib/docker/overlay2/e9515a64ab879e78f20db4d5974939793e8d815710b31e0f1cec6f273213bc3f-init/diff:/var/lib/docker/overlay2/5d7426d7b7590330a796f9ce8929a3dfaf6f95af5e86f4e4ea7f2a7f53308616/diff",
	                "MergedDir": "/var/lib/docker/overlay2/e9515a64ab879e78f20db4d5974939793e8d815710b31e0f1cec6f273213bc3f/merged",
	                "UpperDir": "/var/lib/docker/overlay2/e9515a64ab879e78f20db4d5974939793e8d815710b31e0f1cec6f273213bc3f/diff",
	                "WorkDir": "/var/lib/docker/overlay2/e9515a64ab879e78f20db4d5974939793e8d815710b31e0f1cec6f273213bc3f/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "addons-450053",
	                "Source": "/var/lib/docker/volumes/addons-450053/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-450053",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-450053",
	                "name.minikube.sigs.k8s.io": "addons-450053",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "083e8641d527fe661d34cc5e7a4eba2580f777dd70174d55fc1409cddf766614",
	            "SandboxKey": "/var/run/docker/netns/083e8641d527",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32768"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32769"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32772"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32770"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32771"
	                    }
	                ]
	            },
	            "Networks": {
	                "addons-450053": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "4cd69527c282009ee2878a3d65df6895580a4b156354d85d1f1be8ca8e937d8e",
	                    "EndpointID": "2d1aee6768803d88f67fd94ba3a11cfe5c0f51177cf1d1b679e4b3d4fe3c27a2",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "MacAddress": "2e:13:5d:b6:84:6a",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-450053",
	                        "439b1684c8e4"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-450053 -n addons-450053
helpers_test.go:252: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p addons-450053 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p addons-450053 logs -n 25: (1.158466175s)
helpers_test.go:260: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                   │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ --download-only -p binary-mirror-620131 --alsologtostderr --binary-mirror http://127.0.0.1:33645 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-620131 │ jenkins │ v1.37.0 │ 23 Nov 25 08:20 UTC │                     │
	│ delete  │ -p binary-mirror-620131                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ binary-mirror-620131 │ jenkins │ v1.37.0 │ 23 Nov 25 08:20 UTC │ 23 Nov 25 08:20 UTC │
	│ addons  │ disable dashboard -p addons-450053                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-450053        │ jenkins │ v1.37.0 │ 23 Nov 25 08:20 UTC │                     │
	│ addons  │ enable dashboard -p addons-450053                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-450053        │ jenkins │ v1.37.0 │ 23 Nov 25 08:20 UTC │                     │
	│ start   │ -p addons-450053 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-450053        │ jenkins │ v1.37.0 │ 23 Nov 25 08:20 UTC │ 23 Nov 25 08:21 UTC │
	│ addons  │ addons-450053 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-450053        │ jenkins │ v1.37.0 │ 23 Nov 25 08:21 UTC │                     │
	│ addons  │ addons-450053 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-450053        │ jenkins │ v1.37.0 │ 23 Nov 25 08:22 UTC │                     │
	│ addons  │ enable headlamp -p addons-450053 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-450053        │ jenkins │ v1.37.0 │ 23 Nov 25 08:22 UTC │                     │
	│ addons  │ addons-450053 addons disable headlamp --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-450053        │ jenkins │ v1.37.0 │ 23 Nov 25 08:22 UTC │                     │
	│ addons  │ addons-450053 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-450053        │ jenkins │ v1.37.0 │ 23 Nov 25 08:22 UTC │                     │
	│ addons  │ addons-450053 addons disable cloud-spanner --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-450053        │ jenkins │ v1.37.0 │ 23 Nov 25 08:22 UTC │                     │
	│ ssh     │ addons-450053 ssh cat /opt/local-path-provisioner/pvc-3787c6ea-829c-4a2e-a5fe-c888ef0ec240_default_test-pvc/file1                                                                                                                                                                                                                                                                                                                                                        │ addons-450053        │ jenkins │ v1.37.0 │ 23 Nov 25 08:22 UTC │ 23 Nov 25 08:22 UTC │
	│ addons  │ addons-450053 addons disable storage-provisioner-rancher --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                          │ addons-450053        │ jenkins │ v1.37.0 │ 23 Nov 25 08:22 UTC │                     │
	│ addons  │ addons-450053 addons disable metrics-server --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-450053        │ jenkins │ v1.37.0 │ 23 Nov 25 08:22 UTC │                     │
	│ ip      │ addons-450053 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-450053        │ jenkins │ v1.37.0 │ 23 Nov 25 08:22 UTC │ 23 Nov 25 08:22 UTC │
	│ addons  │ addons-450053 addons disable registry --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-450053        │ jenkins │ v1.37.0 │ 23 Nov 25 08:22 UTC │                     │
	│ addons  │ addons-450053 addons disable inspektor-gadget --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-450053        │ jenkins │ v1.37.0 │ 23 Nov 25 08:22 UTC │                     │
	│ addons  │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-450053                                                                                                                                                                                                                                                                                                                                                                                           │ addons-450053        │ jenkins │ v1.37.0 │ 23 Nov 25 08:22 UTC │ 23 Nov 25 08:22 UTC │
	│ addons  │ addons-450053 addons disable registry-creds --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-450053        │ jenkins │ v1.37.0 │ 23 Nov 25 08:22 UTC │                     │
	│ addons  │ addons-450053 addons disable amd-gpu-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                │ addons-450053        │ jenkins │ v1.37.0 │ 23 Nov 25 08:22 UTC │                     │
	│ ssh     │ addons-450053 ssh curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-450053        │ jenkins │ v1.37.0 │ 23 Nov 25 08:22 UTC │                     │
	│ addons  │ addons-450053 addons disable yakd --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-450053        │ jenkins │ v1.37.0 │ 23 Nov 25 08:22 UTC │                     │
	│ addons  │ addons-450053 addons disable volumesnapshots --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                      │ addons-450053        │ jenkins │ v1.37.0 │ 23 Nov 25 08:23 UTC │                     │
	│ addons  │ addons-450053 addons disable csi-hostpath-driver --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-450053        │ jenkins │ v1.37.0 │ 23 Nov 25 08:23 UTC │                     │
	│ ip      │ addons-450053 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-450053        │ jenkins │ v1.37.0 │ 23 Nov 25 08:24 UTC │ 23 Nov 25 08:24 UTC │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/23 08:20:07
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1123 08:20:07.962581  108626 out.go:360] Setting OutFile to fd 1 ...
	I1123 08:20:07.962692  108626 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:20:07.962704  108626 out.go:374] Setting ErrFile to fd 2...
	I1123 08:20:07.962708  108626 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:20:07.962913  108626 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21969-103686/.minikube/bin
	I1123 08:20:07.963457  108626 out.go:368] Setting JSON to false
	I1123 08:20:07.964229  108626 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":3748,"bootTime":1763882260,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1123 08:20:07.964281  108626 start.go:143] virtualization: kvm guest
	I1123 08:20:07.966143  108626 out.go:179] * [addons-450053] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1123 08:20:07.967403  108626 notify.go:221] Checking for updates...
	I1123 08:20:07.967424  108626 out.go:179]   - MINIKUBE_LOCATION=21969
	I1123 08:20:07.968976  108626 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1123 08:20:07.970272  108626 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21969-103686/kubeconfig
	I1123 08:20:07.971410  108626 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21969-103686/.minikube
	I1123 08:20:07.972471  108626 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1123 08:20:07.973729  108626 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1123 08:20:07.974936  108626 driver.go:422] Setting default libvirt URI to qemu:///system
	I1123 08:20:07.999075  108626 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1123 08:20:07.999177  108626 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 08:20:08.060279  108626 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:28 OomKillDisable:false NGoroutines:51 SystemTime:2025-11-23 08:20:08.048877902 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1123 08:20:08.060495  108626 docker.go:319] overlay module found
	I1123 08:20:08.062263  108626 out.go:179] * Using the docker driver based on user configuration
	I1123 08:20:08.063684  108626 start.go:309] selected driver: docker
	I1123 08:20:08.063702  108626 start.go:927] validating driver "docker" against <nil>
	I1123 08:20:08.063715  108626 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1123 08:20:08.064233  108626 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 08:20:08.118613  108626 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:28 OomKillDisable:false NGoroutines:51 SystemTime:2025-11-23 08:20:08.109286864 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1123 08:20:08.118760  108626 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1123 08:20:08.118984  108626 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1123 08:20:08.120776  108626 out.go:179] * Using Docker driver with root privileges
	I1123 08:20:08.122099  108626 cni.go:84] Creating CNI manager for ""
	I1123 08:20:08.122166  108626 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1123 08:20:08.122178  108626 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1123 08:20:08.122267  108626 start.go:353] cluster config:
	{Name:addons-450053 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-450053 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 08:20:08.123768  108626 out.go:179] * Starting "addons-450053" primary control-plane node in "addons-450053" cluster
	I1123 08:20:08.124911  108626 cache.go:134] Beginning downloading kic base image for docker with crio
	I1123 08:20:08.126197  108626 out.go:179] * Pulling base image v0.0.48-1763789673-21948 ...
	I1123 08:20:08.127449  108626 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1123 08:20:08.127488  108626 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21969-103686/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1123 08:20:08.127497  108626 cache.go:65] Caching tarball of preloaded images
	I1123 08:20:08.127538  108626 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1123 08:20:08.127614  108626 preload.go:238] Found /home/jenkins/minikube-integration/21969-103686/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1123 08:20:08.127629  108626 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1123 08:20:08.127990  108626 profile.go:143] Saving config to /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/addons-450053/config.json ...
	I1123 08:20:08.128031  108626 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/addons-450053/config.json: {Name:mk274e1e607b83af9e40fd0d0cc8661c8ff49964 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:20:08.145310  108626 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f to local cache
	I1123 08:20:08.145433  108626 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local cache directory
	I1123 08:20:08.145450  108626 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local cache directory, skipping pull
	I1123 08:20:08.145455  108626 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in cache, skipping pull
	I1123 08:20:08.145465  108626 cache.go:166] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f as a tarball
	I1123 08:20:08.145469  108626 cache.go:176] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f from local cache
	I1123 08:20:20.462525  108626 cache.go:178] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f from cached tarball
	I1123 08:20:20.462582  108626 cache.go:243] Successfully downloaded all kic artifacts
	I1123 08:20:20.462651  108626 start.go:360] acquireMachinesLock for addons-450053: {Name:mk177bc578c2349bdc0093b5404d31df1a3bbdc2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1123 08:20:20.462784  108626 start.go:364] duration metric: took 102.758µs to acquireMachinesLock for "addons-450053"
	I1123 08:20:20.462814  108626 start.go:93] Provisioning new machine with config: &{Name:addons-450053 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-450053 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1123 08:20:20.462913  108626 start.go:125] createHost starting for "" (driver="docker")
	I1123 08:20:20.464808  108626 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1123 08:20:20.465048  108626 start.go:159] libmachine.API.Create for "addons-450053" (driver="docker")
	I1123 08:20:20.465087  108626 client.go:173] LocalClient.Create starting
	I1123 08:20:20.465191  108626 main.go:143] libmachine: Creating CA: /home/jenkins/minikube-integration/21969-103686/.minikube/certs/ca.pem
	I1123 08:20:20.574868  108626 main.go:143] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21969-103686/.minikube/certs/cert.pem
	I1123 08:20:20.667339  108626 cli_runner.go:164] Run: docker network inspect addons-450053 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1123 08:20:20.686522  108626 cli_runner.go:211] docker network inspect addons-450053 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1123 08:20:20.686607  108626 network_create.go:284] running [docker network inspect addons-450053] to gather additional debugging logs...
	I1123 08:20:20.686627  108626 cli_runner.go:164] Run: docker network inspect addons-450053
	W1123 08:20:20.702688  108626 cli_runner.go:211] docker network inspect addons-450053 returned with exit code 1
	I1123 08:20:20.702724  108626 network_create.go:287] error running [docker network inspect addons-450053]: docker network inspect addons-450053: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-450053 not found
	I1123 08:20:20.702742  108626 network_create.go:289] output of [docker network inspect addons-450053]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-450053 not found
	
	** /stderr **
	I1123 08:20:20.702845  108626 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1123 08:20:20.719295  108626 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00151b330}
	I1123 08:20:20.719339  108626 network_create.go:124] attempt to create docker network addons-450053 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1123 08:20:20.719397  108626 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-450053 addons-450053
	I1123 08:20:20.769669  108626 network_create.go:108] docker network addons-450053 192.168.49.0/24 created
	I1123 08:20:20.769701  108626 kic.go:121] calculated static IP "192.168.49.2" for the "addons-450053" container
	I1123 08:20:20.769773  108626 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1123 08:20:20.786426  108626 cli_runner.go:164] Run: docker volume create addons-450053 --label name.minikube.sigs.k8s.io=addons-450053 --label created_by.minikube.sigs.k8s.io=true
	I1123 08:20:20.804065  108626 oci.go:103] Successfully created a docker volume addons-450053
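	The two docker commands above can be replayed by hand to reproduce the same network and volume; a minimal sketch reusing only the flags visible in this log (everything else is a stock docker CLI call):
	
	  # cluster network with the subnet, gateway and MTU chosen above
	  docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 \
	    -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 \
	    --label=created_by.minikube.sigs.k8s.io=true \
	    --label=name.minikube.sigs.k8s.io=addons-450053 addons-450053
	  # named volume that will back /var of the node container
	  docker volume create addons-450053 \
	    --label name.minikube.sigs.k8s.io=addons-450053 \
	    --label created_by.minikube.sigs.k8s.io=true
	  # sanity check
	  docker network inspect addons-450053 --format '{{(index .IPAM.Config 0).Subnet}}'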
	I1123 08:20:20.804155  108626 cli_runner.go:164] Run: docker run --rm --name addons-450053-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-450053 --entrypoint /usr/bin/test -v addons-450053:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -d /var/lib
	I1123 08:20:23.952600  108626 cli_runner.go:217] Completed: docker run --rm --name addons-450053-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-450053 --entrypoint /usr/bin/test -v addons-450053:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -d /var/lib: (3.148382796s)
	I1123 08:20:23.952643  108626 oci.go:107] Successfully prepared a docker volume addons-450053
	I1123 08:20:23.952692  108626 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1123 08:20:23.952708  108626 kic.go:194] Starting extracting preloaded images to volume ...
	I1123 08:20:23.952779  108626 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21969-103686/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-450053:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -I lz4 -xf /preloaded.tar -C /extractDir
	I1123 08:20:28.223743  108626 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21969-103686/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-450053:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -I lz4 -xf /preloaded.tar -C /extractDir: (4.270913467s)
	I1123 08:20:28.223778  108626 kic.go:203] duration metric: took 4.271067025s to extract preloaded images to volume ...
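	The extraction above is the throwaway-container pattern: the lz4 preload tarball is bind-mounted read-only into a container whose entrypoint is tar, and unpacked straight into the named volume. The same pattern with a placeholder tarball path (TARBALL is illustrative, not from this log):
	
	  TARBALL=/path/to/preloaded-images.tar.lz4   # placeholder path
	  docker run --rm --entrypoint /usr/bin/tar \
	    -v "$TARBALL":/preloaded.tar:ro \
	    -v addons-450053:/extractDir \
	    gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948 \
	    -I lz4 -xf /preloaded.tar -C /extractDir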
	W1123 08:20:28.223886  108626 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1123 08:20:28.223918  108626 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1123 08:20:28.223990  108626 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1123 08:20:28.279836  108626 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-450053 --name addons-450053 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-450053 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-450053 --network addons-450053 --ip 192.168.49.2 --volume addons-450053:/var --security-opt apparmor=unconfined --memory=4096mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f
	I1123 08:20:28.589572  108626 cli_runner.go:164] Run: docker container inspect addons-450053 --format={{.State.Running}}
	I1123 08:20:28.607516  108626 cli_runner.go:164] Run: docker container inspect addons-450053 --format={{.State.Status}}
	I1123 08:20:28.626032  108626 cli_runner.go:164] Run: docker exec addons-450053 stat /var/lib/dpkg/alternatives/iptables
	I1123 08:20:28.673268  108626 oci.go:144] the created container "addons-450053" has a running status.
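	Because every container port is published to an ephemeral host port bound to 127.0.0.1 (--publish=127.0.0.1::8443 and friends in the docker run above), the host-side ports have to be looked up after the fact, which is what the repeated container-inspect calls below do. The same lookup with docker port:
	
	  docker port addons-450053 22/tcp     # -> 127.0.0.1:32768 in this run
	  docker port addons-450053 8443/tcp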
	I1123 08:20:28.673301  108626 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21969-103686/.minikube/machines/addons-450053/id_rsa...
	I1123 08:20:28.702842  108626 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21969-103686/.minikube/machines/addons-450053/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1123 08:20:28.727506  108626 cli_runner.go:164] Run: docker container inspect addons-450053 --format={{.State.Status}}
	I1123 08:20:28.752192  108626 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1123 08:20:28.752214  108626 kic_runner.go:114] Args: [docker exec --privileged addons-450053 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1123 08:20:28.799824  108626 cli_runner.go:164] Run: docker container inspect addons-450053 --format={{.State.Status}}
	I1123 08:20:28.820861  108626 machine.go:94] provisionDockerMachine start ...
	I1123 08:20:28.821004  108626 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-450053
	I1123 08:20:28.841251  108626 main.go:143] libmachine: Using SSH client type: native
	I1123 08:20:28.841502  108626 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1123 08:20:28.841515  108626 main.go:143] libmachine: About to run SSH command:
	hostname
	I1123 08:20:28.842898  108626 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:53798->127.0.0.1:32768: read: connection reset by peer
	I1123 08:20:31.988052  108626 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-450053
	
	I1123 08:20:31.988083  108626 ubuntu.go:182] provisioning hostname "addons-450053"
	I1123 08:20:31.988154  108626 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-450053
	I1123 08:20:32.006140  108626 main.go:143] libmachine: Using SSH client type: native
	I1123 08:20:32.006362  108626 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1123 08:20:32.006376  108626 main.go:143] libmachine: About to run SSH command:
	sudo hostname addons-450053 && echo "addons-450053" | sudo tee /etc/hostname
	I1123 08:20:32.158014  108626 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-450053
	
	I1123 08:20:32.158087  108626 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-450053
	I1123 08:20:32.175328  108626 main.go:143] libmachine: Using SSH client type: native
	I1123 08:20:32.175530  108626 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1123 08:20:32.175546  108626 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-450053' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-450053/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-450053' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1123 08:20:32.317880  108626 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1123 08:20:32.317914  108626 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21969-103686/.minikube CaCertPath:/home/jenkins/minikube-integration/21969-103686/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21969-103686/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21969-103686/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21969-103686/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21969-103686/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21969-103686/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21969-103686/.minikube}
	I1123 08:20:32.317945  108626 ubuntu.go:190] setting up certificates
	I1123 08:20:32.317987  108626 provision.go:84] configureAuth start
	I1123 08:20:32.318067  108626 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-450053
	I1123 08:20:32.336704  108626 provision.go:143] copyHostCerts
	I1123 08:20:32.336779  108626 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21969-103686/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21969-103686/.minikube/ca.pem (1078 bytes)
	I1123 08:20:32.336908  108626 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21969-103686/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21969-103686/.minikube/cert.pem (1123 bytes)
	I1123 08:20:32.336988  108626 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21969-103686/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21969-103686/.minikube/key.pem (1675 bytes)
	I1123 08:20:32.337059  108626 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21969-103686/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21969-103686/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21969-103686/.minikube/certs/ca-key.pem org=jenkins.addons-450053 san=[127.0.0.1 192.168.49.2 addons-450053 localhost minikube]
	I1123 08:20:32.413474  108626 provision.go:177] copyRemoteCerts
	I1123 08:20:32.413532  108626 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1123 08:20:32.413568  108626 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-450053
	I1123 08:20:32.431550  108626 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21969-103686/.minikube/machines/addons-450053/id_rsa Username:docker}
	I1123 08:20:32.532166  108626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-103686/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1123 08:20:32.551728  108626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-103686/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1123 08:20:32.568638  108626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-103686/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1123 08:20:32.585290  108626 provision.go:87] duration metric: took 267.278941ms to configureAuth
	I1123 08:20:32.585325  108626 ubuntu.go:206] setting minikube options for container-runtime
	I1123 08:20:32.585512  108626 config.go:182] Loaded profile config "addons-450053": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 08:20:32.585620  108626 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-450053
	I1123 08:20:32.603673  108626 main.go:143] libmachine: Using SSH client type: native
	I1123 08:20:32.603928  108626 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1123 08:20:32.603956  108626 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1123 08:20:32.883315  108626 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1123 08:20:32.883339  108626 machine.go:97] duration metric: took 4.062439878s to provisionDockerMachine
	I1123 08:20:32.883349  108626 client.go:176] duration metric: took 12.41825642s to LocalClient.Create
	I1123 08:20:32.883368  108626 start.go:167] duration metric: took 12.418322338s to libmachine.API.Create "addons-450053"
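	The insecure-registry option written above lands in a sysconfig file that the crio unit sources; it can be checked from inside the node with minikube ssh (a sketch; the file path comes from the log):
	
	  minikube -p addons-450053 ssh -- cat /etc/sysconfig/crio.minikube
	  # CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '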
	I1123 08:20:32.883375  108626 start.go:293] postStartSetup for "addons-450053" (driver="docker")
	I1123 08:20:32.883385  108626 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1123 08:20:32.883435  108626 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1123 08:20:32.883473  108626 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-450053
	I1123 08:20:32.901171  108626 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21969-103686/.minikube/machines/addons-450053/id_rsa Username:docker}
	I1123 08:20:33.003767  108626 ssh_runner.go:195] Run: cat /etc/os-release
	I1123 08:20:33.007202  108626 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1123 08:20:33.007237  108626 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1123 08:20:33.007251  108626 filesync.go:126] Scanning /home/jenkins/minikube-integration/21969-103686/.minikube/addons for local assets ...
	I1123 08:20:33.007310  108626 filesync.go:126] Scanning /home/jenkins/minikube-integration/21969-103686/.minikube/files for local assets ...
	I1123 08:20:33.007334  108626 start.go:296] duration metric: took 123.952679ms for postStartSetup
	I1123 08:20:33.007624  108626 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-450053
	I1123 08:20:33.025023  108626 profile.go:143] Saving config to /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/addons-450053/config.json ...
	I1123 08:20:33.025363  108626 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1123 08:20:33.025420  108626 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-450053
	I1123 08:20:33.042114  108626 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21969-103686/.minikube/machines/addons-450053/id_rsa Username:docker}
	I1123 08:20:33.140113  108626 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1123 08:20:33.144701  108626 start.go:128] duration metric: took 12.681769644s to createHost
	I1123 08:20:33.144729  108626 start.go:83] releasing machines lock for "addons-450053", held for 12.681929129s
	I1123 08:20:33.144803  108626 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-450053
	I1123 08:20:33.163635  108626 ssh_runner.go:195] Run: cat /version.json
	I1123 08:20:33.163683  108626 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-450053
	I1123 08:20:33.163719  108626 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1123 08:20:33.163792  108626 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-450053
	I1123 08:20:33.183067  108626 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21969-103686/.minikube/machines/addons-450053/id_rsa Username:docker}
	I1123 08:20:33.183067  108626 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21969-103686/.minikube/machines/addons-450053/id_rsa Username:docker}
	I1123 08:20:33.333934  108626 ssh_runner.go:195] Run: systemctl --version
	I1123 08:20:33.340229  108626 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1123 08:20:33.373560  108626 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1123 08:20:33.377946  108626 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1123 08:20:33.378051  108626 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1123 08:20:33.402952  108626 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
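	The find/mv one-liner above just renames any bridge or podman CNI config so kindnet ends up as the only active plugin. Spelled out, the same step is roughly (a sketch; the .mk_disabled suffix is minikube's own convention, as logged):
	
	  for f in /etc/cni/net.d/*bridge* /etc/cni/net.d/*podman*; do
	    case "$f" in *.mk_disabled) continue ;; esac   # already parked
	    [ -e "$f" ] && sudo mv "$f" "$f.mk_disabled"
	  done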
	I1123 08:20:33.402989  108626 start.go:496] detecting cgroup driver to use...
	I1123 08:20:33.403024  108626 detect.go:190] detected "systemd" cgroup driver on host os
	I1123 08:20:33.403069  108626 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1123 08:20:33.418807  108626 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1123 08:20:33.430720  108626 docker.go:218] disabling cri-docker service (if available) ...
	I1123 08:20:33.430772  108626 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1123 08:20:33.446267  108626 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1123 08:20:33.462706  108626 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1123 08:20:33.543601  108626 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1123 08:20:33.627527  108626 docker.go:234] disabling docker service ...
	I1123 08:20:33.627587  108626 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1123 08:20:33.646200  108626 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1123 08:20:33.658410  108626 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1123 08:20:33.740893  108626 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1123 08:20:33.822686  108626 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1123 08:20:33.835050  108626 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1123 08:20:33.848178  108626 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1123 08:20:33.848235  108626 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 08:20:33.857641  108626 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1123 08:20:33.857706  108626 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 08:20:33.866518  108626 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 08:20:33.875329  108626 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 08:20:33.883661  108626 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1123 08:20:33.891437  108626 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 08:20:33.899669  108626 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 08:20:33.913120  108626 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 08:20:33.921983  108626 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1123 08:20:33.930259  108626 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1123 08:20:33.937744  108626 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 08:20:34.012544  108626 ssh_runner.go:195] Run: sudo systemctl restart crio
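	The run of sed commands above edits a single drop-in, /etc/crio/crio.conf.d/02-crio.conf: it pins the pause image, switches the cgroup manager to systemd, forces conmon into the pod cgroup, and opens unprivileged low ports via default_sysctls. Assuming a stock drop-in beforehand, the touched lines should afterwards read roughly:
	
	  sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged' /etc/crio/crio.conf.d/02-crio.conf
	  # pause_image = "registry.k8s.io/pause:3.10.1"
	  # cgroup_manager = "systemd"
	  # conmon_cgroup = "pod"
	  #   "net.ipv4.ip_unprivileged_port_start=0",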
	I1123 08:20:34.152368  108626 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1123 08:20:34.152466  108626 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1123 08:20:34.156504  108626 start.go:564] Will wait 60s for crictl version
	I1123 08:20:34.156569  108626 ssh_runner.go:195] Run: which crictl
	I1123 08:20:34.160041  108626 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1123 08:20:34.185848  108626 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1123 08:20:34.185940  108626 ssh_runner.go:195] Run: crio --version
	I1123 08:20:34.213792  108626 ssh_runner.go:195] Run: crio --version
	I1123 08:20:34.245074  108626 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1123 08:20:34.246171  108626 cli_runner.go:164] Run: docker network inspect addons-450053 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1123 08:20:34.264304  108626 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1123 08:20:34.268607  108626 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
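	The { grep -v ...; } > /tmp/h.$$; sudo cp idiom above appears because sudo does not apply to a shell redirection: the hosts file is rebuilt in /tmp as the current user and then copied into place with root. The same idiom as a reusable sketch (the helper name is hypothetical):
	
	  set_host_entry() {  # hypothetical helper, same idiom as the log
	    local ip="$1" name="$2" tmp="/tmp/hosts.$$"
	    { grep -v $'\t'"${name}"'$' /etc/hosts; printf '%s\t%s\n' "$ip" "$name"; } > "$tmp"
	    sudo cp "$tmp" /etc/hosts && rm -f "$tmp"
	  }
	  set_host_entry 192.168.49.1 host.minikube.internal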
	I1123 08:20:34.278765  108626 kubeadm.go:884] updating cluster {Name:addons-450053 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-450053 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1123 08:20:34.278880  108626 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1123 08:20:34.278930  108626 ssh_runner.go:195] Run: sudo crictl images --output json
	I1123 08:20:34.310145  108626 crio.go:514] all images are preloaded for cri-o runtime.
	I1123 08:20:34.310175  108626 crio.go:433] Images already preloaded, skipping extraction
	I1123 08:20:34.310229  108626 ssh_runner.go:195] Run: sudo crictl images --output json
	I1123 08:20:34.336015  108626 crio.go:514] all images are preloaded for cri-o runtime.
	I1123 08:20:34.336038  108626 cache_images.go:86] Images are preloaded, skipping loading
	I1123 08:20:34.336048  108626 kubeadm.go:935] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1123 08:20:34.336187  108626 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-450053 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:addons-450053 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1123 08:20:34.336274  108626 ssh_runner.go:195] Run: crio config
	I1123 08:20:34.378790  108626 cni.go:84] Creating CNI manager for ""
	I1123 08:20:34.378807  108626 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1123 08:20:34.378827  108626 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1123 08:20:34.378850  108626 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-450053 NodeName:addons-450053 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1123 08:20:34.379007  108626 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-450053"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
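	The generated kubeadm YAML above is staged as /var/tmp/minikube/kubeadm.yaml.new (see the scp line below) before kubeadm consumes it. It can be sanity-checked without touching the node, e.g. with a dry run (a sketch; not part of the test flow itself):
	
	  sudo /var/lib/minikube/binaries/v1.34.1/kubeadm init \
	    --config /var/tmp/minikube/kubeadm.yaml.new --dry-run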
	
	I1123 08:20:34.379065  108626 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1123 08:20:34.387171  108626 binaries.go:51] Found k8s binaries, skipping transfer
	I1123 08:20:34.387233  108626 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1123 08:20:34.394757  108626 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1123 08:20:34.406815  108626 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1123 08:20:34.421412  108626 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
	I1123 08:20:34.434117  108626 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1123 08:20:34.437638  108626 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1123 08:20:34.446915  108626 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 08:20:34.523495  108626 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 08:20:34.550351  108626 certs.go:69] Setting up /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/addons-450053 for IP: 192.168.49.2
	I1123 08:20:34.550391  108626 certs.go:195] generating shared ca certs ...
	I1123 08:20:34.550407  108626 certs.go:227] acquiring lock for ca certs: {Name:mkeed0bc088459219396fb35c578dfb0927f31a3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:20:34.550522  108626 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21969-103686/.minikube/ca.key
	I1123 08:20:34.631919  108626 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21969-103686/.minikube/ca.crt ...
	I1123 08:20:34.631948  108626 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21969-103686/.minikube/ca.crt: {Name:mk1d675d529f1bcc6a221325ecb3a430ae98eb0e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:20:34.632137  108626 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21969-103686/.minikube/ca.key ...
	I1123 08:20:34.632151  108626 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21969-103686/.minikube/ca.key: {Name:mk6bf8fbad88d6534617d5f3156d47b7090962e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:20:34.632225  108626 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21969-103686/.minikube/proxy-client-ca.key
	I1123 08:20:34.762285  108626 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21969-103686/.minikube/proxy-client-ca.crt ...
	I1123 08:20:34.762317  108626 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21969-103686/.minikube/proxy-client-ca.crt: {Name:mk2298706c07912f22208981415546e9068687dc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:20:34.762489  108626 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21969-103686/.minikube/proxy-client-ca.key ...
	I1123 08:20:34.762501  108626 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21969-103686/.minikube/proxy-client-ca.key: {Name:mk7c4cc9c3cf8070eb9b93fc403c104fbd5f1451 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:20:34.762568  108626 certs.go:257] generating profile certs ...
	I1123 08:20:34.762629  108626 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/addons-450053/client.key
	I1123 08:20:34.762643  108626 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/addons-450053/client.crt with IP's: []
	I1123 08:20:34.806101  108626 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/addons-450053/client.crt ...
	I1123 08:20:34.806142  108626 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/addons-450053/client.crt: {Name:mkadea70419b612e10ee90d8d53591fa9403899c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:20:34.806303  108626 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/addons-450053/client.key ...
	I1123 08:20:34.806315  108626 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/addons-450053/client.key: {Name:mk942aad5e59dc1c80fcad11319c8264450eab2b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:20:34.806388  108626 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/addons-450053/apiserver.key.70e65df3
	I1123 08:20:34.806406  108626 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/addons-450053/apiserver.crt.70e65df3 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1123 08:20:34.912467  108626 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/addons-450053/apiserver.crt.70e65df3 ...
	I1123 08:20:34.912496  108626 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/addons-450053/apiserver.crt.70e65df3: {Name:mk3cabba9cf634dbae747254f7448b700b363155 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:20:34.912653  108626 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/addons-450053/apiserver.key.70e65df3 ...
	I1123 08:20:34.912667  108626 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/addons-450053/apiserver.key.70e65df3: {Name:mk71693029136312bbc24afc86fa26f6c7d155a1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:20:34.912734  108626 certs.go:382] copying /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/addons-450053/apiserver.crt.70e65df3 -> /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/addons-450053/apiserver.crt
	I1123 08:20:34.912810  108626 certs.go:386] copying /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/addons-450053/apiserver.key.70e65df3 -> /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/addons-450053/apiserver.key
	I1123 08:20:34.912858  108626 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/addons-450053/proxy-client.key
	I1123 08:20:34.912873  108626 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/addons-450053/proxy-client.crt with IP's: []
	I1123 08:20:34.929762  108626 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/addons-450053/proxy-client.crt ...
	I1123 08:20:34.929780  108626 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/addons-450053/proxy-client.crt: {Name:mk426e0261f644a97ff6e2c4d1cb31f04350a9a8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:20:34.929894  108626 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/addons-450053/proxy-client.key ...
	I1123 08:20:34.929909  108626 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/addons-450053/proxy-client.key: {Name:mkbe68a9be26abcd20e0ac51b23b6695c01dfa81 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:20:34.930101  108626 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-103686/.minikube/certs/ca-key.pem (1675 bytes)
	I1123 08:20:34.930136  108626 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-103686/.minikube/certs/ca.pem (1078 bytes)
	I1123 08:20:34.930162  108626 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-103686/.minikube/certs/cert.pem (1123 bytes)
	I1123 08:20:34.930195  108626 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-103686/.minikube/certs/key.pem (1675 bytes)
	I1123 08:20:34.930701  108626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-103686/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1123 08:20:34.948789  108626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-103686/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1123 08:20:34.965754  108626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-103686/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1123 08:20:34.983196  108626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-103686/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1671 bytes)
	I1123 08:20:35.000826  108626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/addons-450053/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1123 08:20:35.018158  108626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/addons-450053/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1123 08:20:35.035770  108626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/addons-450053/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1123 08:20:35.052886  108626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/addons-450053/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1123 08:20:35.069839  108626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-103686/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1123 08:20:35.089392  108626 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1123 08:20:35.101630  108626 ssh_runner.go:195] Run: openssl version
	I1123 08:20:35.107646  108626 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1123 08:20:35.118644  108626 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1123 08:20:35.122307  108626 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 23 08:20 /usr/share/ca-certificates/minikubeCA.pem
	I1123 08:20:35.122368  108626 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1123 08:20:35.155780  108626 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
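These two steps implement OpenSSL's hashed-directory trust-store convention: x509 -hash prints the CA's subject-name hash, and a <hash>.0 symlink in /etc/ssl/certs lets anything using -CApath find the issuer by that name. The hash here is b5213941, matching the b5213941.0 link just created; verifying by hand with the paths from the log:

  openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941
  openssl verify -CApath /etc/ssl/certs /usr/share/ca-certificates/minikubeCA.pem  # expect: OK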
	I1123 08:20:35.164557  108626 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1123 08:20:35.168326  108626 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1123 08:20:35.168384  108626 kubeadm.go:401] StartCluster: {Name:addons-450053 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-450053 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 08:20:35.168476  108626 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1123 08:20:35.168523  108626 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1123 08:20:35.195540  108626 cri.go:89] found id: ""
	I1123 08:20:35.195604  108626 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1123 08:20:35.203858  108626 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1123 08:20:35.211955  108626 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1123 08:20:35.212037  108626 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1123 08:20:35.219756  108626 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1123 08:20:35.219775  108626 kubeadm.go:158] found existing configuration files:
	
	I1123 08:20:35.219888  108626 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1123 08:20:35.227385  108626 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1123 08:20:35.227442  108626 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1123 08:20:35.234750  108626 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1123 08:20:35.242012  108626 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1123 08:20:35.242068  108626 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1123 08:20:35.249190  108626 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1123 08:20:35.256480  108626 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1123 08:20:35.256540  108626 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1123 08:20:35.263580  108626 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1123 08:20:35.271258  108626 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1123 08:20:35.271324  108626 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1123 08:20:35.278564  108626 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1123 08:20:35.336219  108626 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1044-gcp\n", err: exit status 1
	I1123 08:20:35.390486  108626 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
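The init invocation above pre-waives every preflight check a nested docker-driver node is expected to fail (Swap, NumCPU, Mem, SystemVerification, port 10250, the bridge-nf-call-iptables file, and the File/DirAvailable checks for pre-staged manifests), so only the two warnings above survive. Stripped of most of the ignore list, the manual equivalent is roughly:

  sudo env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" \
    kubeadm init --config /var/tmp/minikube/kubeadm.yaml \
    --ignore-preflight-errors=Swap,NumCPU,Mem,SystemVerification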
	I1123 08:20:46.331818  108626 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1123 08:20:46.331894  108626 kubeadm.go:319] [preflight] Running pre-flight checks
	I1123 08:20:46.332050  108626 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1123 08:20:46.332113  108626 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1044-gcp
	I1123 08:20:46.332145  108626 kubeadm.go:319] OS: Linux
	I1123 08:20:46.332207  108626 kubeadm.go:319] CGROUPS_CPU: enabled
	I1123 08:20:46.332276  108626 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1123 08:20:46.332371  108626 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1123 08:20:46.332425  108626 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1123 08:20:46.332504  108626 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1123 08:20:46.332578  108626 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1123 08:20:46.332629  108626 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1123 08:20:46.332667  108626 kubeadm.go:319] CGROUPS_IO: enabled
	I1123 08:20:46.332779  108626 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1123 08:20:46.332906  108626 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1123 08:20:46.333030  108626 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1123 08:20:46.333114  108626 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1123 08:20:46.335290  108626 out.go:252]   - Generating certificates and keys ...
	I1123 08:20:46.335364  108626 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1123 08:20:46.335438  108626 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1123 08:20:46.335517  108626 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1123 08:20:46.335592  108626 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1123 08:20:46.335679  108626 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1123 08:20:46.335755  108626 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1123 08:20:46.335821  108626 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1123 08:20:46.336000  108626 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [addons-450053 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1123 08:20:46.336087  108626 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1123 08:20:46.336231  108626 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [addons-450053 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1123 08:20:46.336309  108626 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1123 08:20:46.336407  108626 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1123 08:20:46.336448  108626 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1123 08:20:46.336533  108626 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1123 08:20:46.336617  108626 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1123 08:20:46.336729  108626 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1123 08:20:46.336813  108626 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1123 08:20:46.336916  108626 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1123 08:20:46.337010  108626 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1123 08:20:46.337120  108626 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1123 08:20:46.337212  108626 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1123 08:20:46.338366  108626 out.go:252]   - Booting up control plane ...
	I1123 08:20:46.338468  108626 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1123 08:20:46.338559  108626 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1123 08:20:46.338619  108626 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1123 08:20:46.338781  108626 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1123 08:20:46.338908  108626 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1123 08:20:46.339064  108626 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1123 08:20:46.339184  108626 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1123 08:20:46.339224  108626 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1123 08:20:46.339387  108626 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1123 08:20:46.339539  108626 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1123 08:20:46.339625  108626 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.501307511s
	I1123 08:20:46.339757  108626 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1123 08:20:46.339824  108626 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1123 08:20:46.339924  108626 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1123 08:20:46.340064  108626 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1123 08:20:46.340153  108626 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.633567907s
	I1123 08:20:46.340210  108626 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.208377193s
	I1123 08:20:46.340267  108626 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 4.002036688s
	I1123 08:20:46.340374  108626 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1123 08:20:46.340503  108626 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1123 08:20:46.340577  108626 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1123 08:20:46.340844  108626 kubeadm.go:319] [mark-control-plane] Marking the node addons-450053 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1123 08:20:46.340929  108626 kubeadm.go:319] [bootstrap-token] Using token: dg55x4.9vphqzd2ayx2cukh
	I1123 08:20:46.342119  108626 out.go:252]   - Configuring RBAC rules ...
	I1123 08:20:46.342208  108626 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1123 08:20:46.342280  108626 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1123 08:20:46.342415  108626 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1123 08:20:46.342552  108626 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1123 08:20:46.342696  108626 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1123 08:20:46.342767  108626 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1123 08:20:46.342859  108626 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1123 08:20:46.342903  108626 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1123 08:20:46.342946  108626 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1123 08:20:46.342956  108626 kubeadm.go:319] 
	I1123 08:20:46.343015  108626 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1123 08:20:46.343023  108626 kubeadm.go:319] 
	I1123 08:20:46.343091  108626 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1123 08:20:46.343100  108626 kubeadm.go:319] 
	I1123 08:20:46.343128  108626 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1123 08:20:46.343182  108626 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1123 08:20:46.343230  108626 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1123 08:20:46.343236  108626 kubeadm.go:319] 
	I1123 08:20:46.343281  108626 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1123 08:20:46.343287  108626 kubeadm.go:319] 
	I1123 08:20:46.343327  108626 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1123 08:20:46.343336  108626 kubeadm.go:319] 
	I1123 08:20:46.343386  108626 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1123 08:20:46.343448  108626 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1123 08:20:46.343503  108626 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1123 08:20:46.343513  108626 kubeadm.go:319] 
	I1123 08:20:46.343598  108626 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1123 08:20:46.343693  108626 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1123 08:20:46.343705  108626 kubeadm.go:319] 
	I1123 08:20:46.343796  108626 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token dg55x4.9vphqzd2ayx2cukh \
	I1123 08:20:46.343919  108626 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:25411732a305fa463b7606eb24f85c2336be0d99fc4e5db190f3fbac97d3dca3 \
	I1123 08:20:46.343945  108626 kubeadm.go:319] 	--control-plane 
	I1123 08:20:46.343951  108626 kubeadm.go:319] 
	I1123 08:20:46.344077  108626 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1123 08:20:46.344099  108626 kubeadm.go:319] 
	I1123 08:20:46.344205  108626 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token dg55x4.9vphqzd2ayx2cukh \
	I1123 08:20:46.344343  108626 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:25411732a305fa463b7606eb24f85c2336be0d99fc4e5db190f3fbac97d3dca3 
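The --discovery-token-ca-cert-hash value pins the cluster CA for joining nodes; it is the SHA-256 of the CA certificate's DER-encoded public key. It can be recomputed on the control plane from the certificateDir shown in the [certs] phase, here /var/lib/minikube/certs (this is kubeadm's documented derivation, using the key-type-agnostic openssl pkey rather than the rsa-specific form in the kubeadm docs):

  openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
    | openssl pkey -pubin -outform der \
    | openssl dgst -sha256        # should print 25411732a305fa46...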
	I1123 08:20:46.344362  108626 cni.go:84] Creating CNI manager for ""
	I1123 08:20:46.344372  108626 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1123 08:20:46.345767  108626 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1123 08:20:46.346885  108626 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1123 08:20:46.351238  108626 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1123 08:20:46.351254  108626 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1123 08:20:46.363723  108626 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
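With the docker driver and the crio runtime, minikube falls back to kindnet for pod networking; the 2601-byte manifest applied here deploys it into kube-system, typically as a DaemonSet named kindnet. A quick post-apply health check, assuming the standard kindnet object name and app label:

  kubectl -n kube-system get daemonset kindnet
  kubectl -n kube-system logs -l app=kindnet --tail=20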
	I1123 08:20:46.561033  108626 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1123 08:20:46.561129  108626 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:20:46.561135  108626 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-450053 minikube.k8s.io/updated_at=2025_11_23T08_20_46_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=50c3a8a3c03e8a84b6c978a884d21c3de8c6d4f1 minikube.k8s.io/name=addons-450053 minikube.k8s.io/primary=true
	I1123 08:20:46.639906  108626 ops.go:34] apiserver oom_adj: -16
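The oom_adj probe verifies that the kubelet shielded the API server from the OOM killer. /proc/<pid>/oom_adj is the legacy 17-point scale; the -16 read here is roughly what the kubelet's oom_score_adj for critical static pods (about -997) maps down to (-997 * 17 / 1000 ≈ -16.9). The same check via the current interface:

  cat /proc/$(pgrep -o kube-apiserver)/oom_score_adj   # expect a value near -997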
	I1123 08:20:46.640040  108626 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:20:47.140434  108626 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:20:47.640998  108626 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:20:48.140695  108626 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:20:48.640265  108626 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:20:49.140768  108626 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:20:49.640721  108626 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:20:50.140928  108626 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:20:50.640158  108626 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:20:51.141010  108626 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:20:51.207056  108626 kubeadm.go:1114] duration metric: took 4.646004937s to wait for elevateKubeSystemPrivileges
	I1123 08:20:51.207099  108626 kubeadm.go:403] duration metric: took 16.038718258s to StartCluster
	I1123 08:20:51.207122  108626 settings.go:142] acquiring lock: {Name:mk7e59eae8b3289f60fef384e6a5716369959bb2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:20:51.207249  108626 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21969-103686/kubeconfig
	I1123 08:20:51.207603  108626 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21969-103686/kubeconfig: {Name:mk8cdc20da988b29689f768b3cb01ff7e637077d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:20:51.207828  108626 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1123 08:20:51.207860  108626 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1123 08:20:51.207934  108626 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1123 08:20:51.208095  108626 addons.go:70] Setting yakd=true in profile "addons-450053"
	I1123 08:20:51.208130  108626 addons.go:239] Setting addon yakd=true in "addons-450053"
	I1123 08:20:51.208139  108626 config.go:182] Loaded profile config "addons-450053": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 08:20:51.208164  108626 host.go:66] Checking if "addons-450053" exists ...
	I1123 08:20:51.208165  108626 addons.go:70] Setting inspektor-gadget=true in profile "addons-450053"
	I1123 08:20:51.208196  108626 addons.go:70] Setting default-storageclass=true in profile "addons-450053"
	I1123 08:20:51.208204  108626 addons.go:239] Setting addon inspektor-gadget=true in "addons-450053"
	I1123 08:20:51.208217  108626 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "addons-450053"
	I1123 08:20:51.208229  108626 addons.go:70] Setting registry-creds=true in profile "addons-450053"
	I1123 08:20:51.208243  108626 addons.go:70] Setting gcp-auth=true in profile "addons-450053"
	I1123 08:20:51.208242  108626 addons.go:70] Setting cloud-spanner=true in profile "addons-450053"
	I1123 08:20:51.208253  108626 addons.go:239] Setting addon registry-creds=true in "addons-450053"
	I1123 08:20:51.208262  108626 addons.go:239] Setting addon cloud-spanner=true in "addons-450053"
	I1123 08:20:51.208271  108626 addons.go:70] Setting ingress=true in profile "addons-450053"
	I1123 08:20:51.208279  108626 host.go:66] Checking if "addons-450053" exists ...
	I1123 08:20:51.208288  108626 addons.go:70] Setting ingress-dns=true in profile "addons-450053"
	I1123 08:20:51.208293  108626 host.go:66] Checking if "addons-450053" exists ...
	I1123 08:20:51.208301  108626 addons.go:239] Setting addon ingress-dns=true in "addons-450053"
	I1123 08:20:51.208328  108626 host.go:66] Checking if "addons-450053" exists ...
	I1123 08:20:51.208345  108626 addons.go:70] Setting amd-gpu-device-plugin=true in profile "addons-450053"
	I1123 08:20:51.208375  108626 addons.go:239] Setting addon amd-gpu-device-plugin=true in "addons-450053"
	I1123 08:20:51.208401  108626 host.go:66] Checking if "addons-450053" exists ...
	I1123 08:20:51.208580  108626 cli_runner.go:164] Run: docker container inspect addons-450053 --format={{.State.Status}}
	I1123 08:20:51.208709  108626 cli_runner.go:164] Run: docker container inspect addons-450053 --format={{.State.Status}}
	I1123 08:20:51.208742  108626 cli_runner.go:164] Run: docker container inspect addons-450053 --format={{.State.Status}}
	I1123 08:20:51.208752  108626 cli_runner.go:164] Run: docker container inspect addons-450053 --format={{.State.Status}}
	I1123 08:20:51.208777  108626 cli_runner.go:164] Run: docker container inspect addons-450053 --format={{.State.Status}}
	I1123 08:20:51.208818  108626 cli_runner.go:164] Run: docker container inspect addons-450053 --format={{.State.Status}}
	I1123 08:20:51.208928  108626 addons.go:70] Setting nvidia-device-plugin=true in profile "addons-450053"
	I1123 08:20:51.208956  108626 addons.go:239] Setting addon nvidia-device-plugin=true in "addons-450053"
	I1123 08:20:51.209023  108626 host.go:66] Checking if "addons-450053" exists ...
	I1123 08:20:51.209687  108626 addons.go:70] Setting storage-provisioner=true in profile "addons-450053"
	I1123 08:20:51.209713  108626 addons.go:239] Setting addon storage-provisioner=true in "addons-450053"
	I1123 08:20:51.209737  108626 host.go:66] Checking if "addons-450053" exists ...
	I1123 08:20:51.209909  108626 addons.go:70] Setting metrics-server=true in profile "addons-450053"
	I1123 08:20:51.209926  108626 addons.go:70] Setting storage-provisioner-rancher=true in profile "addons-450053"
	I1123 08:20:51.209934  108626 addons.go:239] Setting addon metrics-server=true in "addons-450053"
	I1123 08:20:51.209941  108626 addons_storage_classes.go:34] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-450053"
	I1123 08:20:51.210173  108626 addons.go:70] Setting csi-hostpath-driver=true in profile "addons-450053"
	I1123 08:20:51.210204  108626 host.go:66] Checking if "addons-450053" exists ...
	I1123 08:20:51.210222  108626 cli_runner.go:164] Run: docker container inspect addons-450053 --format={{.State.Status}}
	I1123 08:20:51.208281  108626 addons.go:239] Setting addon ingress=true in "addons-450053"
	I1123 08:20:51.210298  108626 host.go:66] Checking if "addons-450053" exists ...
	I1123 08:20:51.210696  108626 cli_runner.go:164] Run: docker container inspect addons-450053 --format={{.State.Status}}
	I1123 08:20:51.210732  108626 cli_runner.go:164] Run: docker container inspect addons-450053 --format={{.State.Status}}
	I1123 08:20:51.210916  108626 out.go:179] * Verifying Kubernetes components...
	I1123 08:20:51.211016  108626 cli_runner.go:164] Run: docker container inspect addons-450053 --format={{.State.Status}}
	I1123 08:20:51.211093  108626 addons.go:70] Setting volcano=true in profile "addons-450053"
	I1123 08:20:51.211105  108626 addons.go:239] Setting addon volcano=true in "addons-450053"
	I1123 08:20:51.211129  108626 host.go:66] Checking if "addons-450053" exists ...
	I1123 08:20:51.211534  108626 cli_runner.go:164] Run: docker container inspect addons-450053 --format={{.State.Status}}
	I1123 08:20:51.208263  108626 mustload.go:66] Loading cluster: addons-450053
	I1123 08:20:51.209914  108626 addons.go:70] Setting volumesnapshots=true in profile "addons-450053"
	I1123 08:20:51.212175  108626 addons.go:239] Setting addon volumesnapshots=true in "addons-450053"
	I1123 08:20:51.212248  108626 host.go:66] Checking if "addons-450053" exists ...
	I1123 08:20:51.210223  108626 addons.go:239] Setting addon csi-hostpath-driver=true in "addons-450053"
	I1123 08:20:51.212714  108626 host.go:66] Checking if "addons-450053" exists ...
	I1123 08:20:51.213155  108626 addons.go:70] Setting registry=true in profile "addons-450053"
	I1123 08:20:51.213173  108626 addons.go:239] Setting addon registry=true in "addons-450053"
	I1123 08:20:51.213200  108626 host.go:66] Checking if "addons-450053" exists ...
	I1123 08:20:51.213213  108626 cli_runner.go:164] Run: docker container inspect addons-450053 --format={{.State.Status}}
	I1123 08:20:51.208234  108626 host.go:66] Checking if "addons-450053" exists ...
	I1123 08:20:51.213959  108626 cli_runner.go:164] Run: docker container inspect addons-450053 --format={{.State.Status}}
	I1123 08:20:51.214701  108626 cli_runner.go:164] Run: docker container inspect addons-450053 --format={{.State.Status}}
	I1123 08:20:51.214824  108626 config.go:182] Loaded profile config "addons-450053": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 08:20:51.215098  108626 cli_runner.go:164] Run: docker container inspect addons-450053 --format={{.State.Status}}
	I1123 08:20:51.215945  108626 cli_runner.go:164] Run: docker container inspect addons-450053 --format={{.State.Status}}
	I1123 08:20:51.217098  108626 cli_runner.go:164] Run: docker container inspect addons-450053 --format={{.State.Status}}
	I1123 08:20:51.217828  108626 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 08:20:51.269524  108626 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.46.0
	I1123 08:20:51.272124  108626 addons.go:436] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1123 08:20:51.272145  108626 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1123 08:20:51.272214  108626 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-450053
	I1123 08:20:51.272373  108626 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1123 08:20:51.273574  108626 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1123 08:20:51.273592  108626 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1123 08:20:51.273686  108626 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-450053
	I1123 08:20:51.276474  108626 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1123 08:20:51.278768  108626 addons.go:436] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1123 08:20:51.278787  108626 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1123 08:20:51.278841  108626 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-450053
	I1123 08:20:51.279287  108626 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1123 08:20:51.281577  108626 addons.go:436] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1123 08:20:51.281610  108626 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1123 08:20:51.281724  108626 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-450053
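Each addon installer above templates the host port that docker published for the container's sshd out of the inspect output; they all resolve to the same endpoint, which is why the sshutil lines below all dial 127.0.0.1:32768. The equivalent one-off lookup:

  docker port addons-450053 22/tcp   # e.g. 0.0.0.0:32768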
	I1123 08:20:51.293041  108626 addons.go:239] Setting addon default-storageclass=true in "addons-450053"
	I1123 08:20:51.302397  108626 host.go:66] Checking if "addons-450053" exists ...
	I1123 08:20:51.294081  108626 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1123 08:20:51.301334  108626 addons.go:239] Setting addon storage-provisioner-rancher=true in "addons-450053"
	I1123 08:20:51.304995  108626 host.go:66] Checking if "addons-450053" exists ...
	I1123 08:20:51.305236  108626 cli_runner.go:164] Run: docker container inspect addons-450053 --format={{.State.Status}}
	I1123 08:20:51.305478  108626 cli_runner.go:164] Run: docker container inspect addons-450053 --format={{.State.Status}}
	I1123 08:20:51.305655  108626 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1123 08:20:51.305709  108626 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.18.0
	I1123 08:20:51.306151  108626 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1123 08:20:51.306899  108626 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1123 08:20:51.306958  108626 addons.go:436] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1123 08:20:51.307245  108626 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1123 08:20:51.307318  108626 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-450053
	I1123 08:20:51.306997  108626 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1123 08:20:51.307565  108626 host.go:66] Checking if "addons-450053" exists ...
	I1123 08:20:51.307597  108626 addons.go:436] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1123 08:20:51.307608  108626 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1123 08:20:51.307664  108626 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-450053
	I1123 08:20:51.307008  108626 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1123 08:20:51.307736  108626 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1123 08:20:51.307887  108626 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-450053
	I1123 08:20:51.308519  108626 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1123 08:20:51.308539  108626 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1123 08:20:51.308548  108626 addons.go:436] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1123 08:20:51.308561  108626 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1123 08:20:51.308584  108626 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-450053
	I1123 08:20:51.308620  108626 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-450053
	I1123 08:20:51.309439  108626 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.14.0
	I1123 08:20:51.311413  108626 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1123 08:20:51.314223  108626 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1123 08:20:51.314953  108626 addons.go:436] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1123 08:20:51.314986  108626 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1123 08:20:51.315050  108626 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-450053
	I1123 08:20:51.315219  108626 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.45
	W1123 08:20:51.315541  108626 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1123 08:20:51.317636  108626 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1123 08:20:51.318867  108626 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1123 08:20:51.319904  108626 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1123 08:20:51.322130  108626 addons.go:436] installing /etc/kubernetes/addons/deployment.yaml
	I1123 08:20:51.322148  108626 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1123 08:20:51.322232  108626 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-450053
	I1123 08:20:51.327230  108626 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1123 08:20:51.327349  108626 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1123 08:20:51.328419  108626 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1123 08:20:51.328537  108626 out.go:179]   - Using image docker.io/registry:3.0.0
	I1123 08:20:51.329775  108626 addons.go:436] installing /etc/kubernetes/addons/registry-rc.yaml
	I1123 08:20:51.329796  108626 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1123 08:20:51.329863  108626 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-450053
	I1123 08:20:51.331465  108626 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1123 08:20:51.332826  108626 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1123 08:20:51.336858  108626 addons.go:436] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1123 08:20:51.336882  108626 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1123 08:20:51.337037  108626 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-450053
	I1123 08:20:51.338227  108626 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21969-103686/.minikube/machines/addons-450053/id_rsa Username:docker}
	I1123 08:20:51.367865  108626 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21969-103686/.minikube/machines/addons-450053/id_rsa Username:docker}
	I1123 08:20:51.371497  108626 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21969-103686/.minikube/machines/addons-450053/id_rsa Username:docker}
	I1123 08:20:51.374206  108626 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1123 08:20:51.374824  108626 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21969-103686/.minikube/machines/addons-450053/id_rsa Username:docker}
	I1123 08:20:51.377744  108626 out.go:179]   - Using image docker.io/busybox:stable
	I1123 08:20:51.378085  108626 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21969-103686/.minikube/machines/addons-450053/id_rsa Username:docker}
	I1123 08:20:51.379186  108626 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1123 08:20:51.379210  108626 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1123 08:20:51.379273  108626 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-450053
	I1123 08:20:51.380898  108626 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21969-103686/.minikube/machines/addons-450053/id_rsa Username:docker}
	I1123 08:20:51.386582  108626 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21969-103686/.minikube/machines/addons-450053/id_rsa Username:docker}
	I1123 08:20:51.392884  108626 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21969-103686/.minikube/machines/addons-450053/id_rsa Username:docker}
	I1123 08:20:51.408295  108626 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1123 08:20:51.408374  108626 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1123 08:20:51.408584  108626 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-450053
	I1123 08:20:51.409982  108626 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21969-103686/.minikube/machines/addons-450053/id_rsa Username:docker}
	I1123 08:20:51.410822  108626 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21969-103686/.minikube/machines/addons-450053/id_rsa Username:docker}
	I1123 08:20:51.413545  108626 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21969-103686/.minikube/machines/addons-450053/id_rsa Username:docker}
	I1123 08:20:51.418029  108626 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21969-103686/.minikube/machines/addons-450053/id_rsa Username:docker}
	I1123 08:20:51.424005  108626 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21969-103686/.minikube/machines/addons-450053/id_rsa Username:docker}
	I1123 08:20:51.425022  108626 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	W1123 08:20:51.425395  108626 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1123 08:20:51.425442  108626 retry.go:31] will retry after 195.827486ms: ssh: handshake failed: EOF
	W1123 08:20:51.425675  108626 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1123 08:20:51.425698  108626 retry.go:31] will retry after 308.481241ms: ssh: handshake failed: EOF
	I1123 08:20:51.434190  108626 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21969-103686/.minikube/machines/addons-450053/id_rsa Username:docker}
	I1123 08:20:51.440322  108626 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21969-103686/.minikube/machines/addons-450053/id_rsa Username:docker}
	I1123 08:20:51.442403  108626 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 08:20:51.530897  108626 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1123 08:20:51.530923  108626 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml
	I1123 08:20:51.538949  108626 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1123 08:20:51.562839  108626 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1123 08:20:51.562868  108626 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1123 08:20:51.571518  108626 addons.go:436] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1123 08:20:51.571543  108626 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1123 08:20:51.584018  108626 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1123 08:20:51.585705  108626 addons.go:436] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1123 08:20:51.585725  108626 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1123 08:20:51.586092  108626 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1123 08:20:51.586804  108626 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1123 08:20:51.588129  108626 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1123 08:20:51.588149  108626 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1123 08:20:51.590392  108626 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1123 08:20:51.590411  108626 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1123 08:20:51.611791  108626 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1123 08:20:51.612556  108626 addons.go:436] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1123 08:20:51.612581  108626 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1123 08:20:51.615953  108626 addons.go:436] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1123 08:20:51.615985  108626 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1123 08:20:51.623163  108626 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1123 08:20:51.627380  108626 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1123 08:20:51.629336  108626 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1123 08:20:51.629359  108626 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1123 08:20:51.639198  108626 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1123 08:20:51.639230  108626 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1123 08:20:51.647719  108626 addons.go:436] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1123 08:20:51.647748  108626 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1123 08:20:51.674745  108626 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1123 08:20:51.677436  108626 addons.go:436] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1123 08:20:51.677479  108626 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1123 08:20:51.699119  108626 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1123 08:20:51.699148  108626 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1123 08:20:51.702349  108626 addons.go:436] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1123 08:20:51.702377  108626 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1123 08:20:51.735063  108626 addons.go:436] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1123 08:20:51.735100  108626 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1123 08:20:51.737120  108626 addons.go:436] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1123 08:20:51.737145  108626 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1123 08:20:51.755585  108626 addons.go:436] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1123 08:20:51.755616  108626 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1123 08:20:51.776157  108626 start.go:977] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
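
[annotation] The host record above is injected by editing the coredns ConfigMap in kube-system. A minimal client-go sketch of that pattern follows; the kubeconfig path and gateway IP are taken from the log, but the exact Corefile rewrite (a hosts block spliced in before loadbalance) is an illustrative assumption, not minikube's actual start.go code.

	package main

	import (
		"context"
		"fmt"
		"log"
		"strings"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			log.Fatal(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			log.Fatal(err)
		}
		ctx := context.Background()
		cm, err := cs.CoreV1().ConfigMaps("kube-system").Get(ctx, "coredns", metav1.GetOptions{})
		if err != nil {
			log.Fatal(err)
		}
		corefile := cm.Data["Corefile"]
		if !strings.Contains(corefile, "host.minikube.internal") {
			// Hypothetical edit point: splice a hosts block resolving the
			// gateway name; the real rewrite location may differ.
			hosts := "    hosts {\n       192.168.49.1 host.minikube.internal\n       fallthrough\n    }\n"
			cm.Data["Corefile"] = strings.Replace(corefile, "loadbalance\n", hosts+"    loadbalance\n", 1)
			if _, err := cs.CoreV1().ConfigMaps("kube-system").Update(ctx, cm, metav1.UpdateOptions{}); err != nil {
				log.Fatal(err)
			}
		}
		fmt.Println("host.minikube.internal record ensured in CoreDNS ConfigMap")
	}
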
	I1123 08:20:51.776841  108626 node_ready.go:35] waiting up to 6m0s for node "addons-450053" to be "Ready" ...
	I1123 08:20:51.788103  108626 addons.go:436] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1123 08:20:51.788133  108626 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1123 08:20:51.817895  108626 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1123 08:20:51.817930  108626 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1123 08:20:51.821383  108626 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1123 08:20:51.853502  108626 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1123 08:20:51.879451  108626 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1123 08:20:51.879485  108626 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1123 08:20:51.880338  108626 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1123 08:20:51.922437  108626 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1123 08:20:51.922544  108626 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1123 08:20:51.937148  108626 addons.go:436] installing /etc/kubernetes/addons/registry-svc.yaml
	I1123 08:20:51.937178  108626 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1123 08:20:51.974532  108626 addons.go:436] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1123 08:20:51.974653  108626 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1123 08:20:51.984304  108626 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1123 08:20:51.984382  108626 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1123 08:20:52.029953  108626 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1123 08:20:52.029996  108626 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1123 08:20:52.031584  108626 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1123 08:20:52.063910  108626 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1123 08:20:52.285689  108626 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-450053" context rescaled to 1 replicas
	I1123 08:20:52.559317  108626 addons.go:495] Verifying addon metrics-server=true in "addons-450053"
	W1123 08:20:52.574438  108626 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
	I1123 08:20:52.599304  108626 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-450053 service yakd-dashboard -n yakd-dashboard
	
	I1123 08:20:53.178505  108626 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.324943989s)
	W1123 08:20:53.178563  108626 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1123 08:20:53.178589  108626 retry.go:31] will retry after 267.656376ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
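
[annotation] The failure above is the classic CRD/CR ordering race: the VolumeSnapshotClass object is applied in the same batch as the CRD that defines it, and API discovery has not yet caught up, so kubectl reports "no matches for kind". The harness handles it by retrying (and, at 08:20:53.446879 below, re-running the apply with --force). An alternative sketch that avoids the race entirely by waiting for the CRD's Established condition before applying the CR; the file paths come from the log, the 60s timeout is an assumption:

	package main

	import (
		"fmt"
		"log"
		"os/exec"
	)

	// run executes kubectl with the given args and echoes combined output.
	func run(args ...string) {
		out, err := exec.Command("kubectl", args...).CombinedOutput()
		fmt.Print(string(out))
		if err != nil {
			log.Fatalf("kubectl %v: %v", args, err)
		}
	}

	func main() {
		// 1. Apply the CRD on its own.
		run("apply", "-f", "/etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml")
		// 2. Block until the API server has established the CRD.
		run("wait", "--for=condition=Established", "--timeout=60s",
			"crd/volumesnapshotclasses.snapshot.storage.k8s.io")
		// 3. Only now apply objects of the newly registered kind.
		run("apply", "-f", "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml")
	}
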
	I1123 08:20:53.178652  108626 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (1.298282875s)
	I1123 08:20:53.178693  108626 addons.go:495] Verifying addon ingress=true in "addons-450053"
	I1123 08:20:53.178721  108626 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (1.147108969s)
	I1123 08:20:53.178740  108626 addons.go:495] Verifying addon registry=true in "addons-450053"
	I1123 08:20:53.178912  108626 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (1.114961309s)
	I1123 08:20:53.178933  108626 addons.go:495] Verifying addon csi-hostpath-driver=true in "addons-450053"
	I1123 08:20:53.180236  108626 out.go:179] * Verifying ingress addon...
	I1123 08:20:53.180272  108626 out.go:179] * Verifying registry addon...
	I1123 08:20:53.181221  108626 out.go:179] * Verifying csi-hostpath-driver addon...
	I1123 08:20:53.182826  108626 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1123 08:20:53.182826  108626 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1123 08:20:53.183919  108626 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1123 08:20:53.185497  108626 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1123 08:20:53.185518  108626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:20:53.186512  108626 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1123 08:20:53.186527  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:20:53.186576  108626 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1123 08:20:53.186591  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
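
[annotation] The kapi.go:96 lines that repeat from here on are all one pattern: poll the pods matching a label selector until every one reports Running. A self-contained sketch of that loop shelling out to kubectl; the namespace and selector are taken from the log, the poll interval and jsonpath query are illustrative:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	// waitForRunning polls until all pods matching selector in ns are Running.
	func waitForRunning(ns, selector string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			out, err := exec.Command("kubectl", "-n", ns, "get", "pods",
				"-l", selector, "-o", "jsonpath={.items[*].status.phase}").Output()
			if err == nil {
				phases := strings.Fields(string(out))
				allRunning := len(phases) > 0
				for _, p := range phases {
					if p != "Running" {
						allRunning = false
						break
					}
				}
				if allRunning {
					return nil
				}
			}
			fmt.Printf("waiting for pod %q, current state: %s\n", selector, strings.TrimSpace(string(out)))
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("pods %q in %q not Running after %s", selector, ns, timeout)
	}

	func main() {
		if err := waitForRunning("ingress-nginx", "app.kubernetes.io/name=ingress-nginx", 6*time.Minute); err != nil {
			panic(err)
		}
	}
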
	I1123 08:20:53.446879  108626 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1123 08:20:53.688738  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:20:53.688865  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:20:53.689088  108626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1123 08:20:53.779491  108626 node_ready.go:57] node "addons-450053" has "Ready":"False" status (will retry)
	I1123 08:20:54.186829  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:20:54.186829  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:20:54.187039  108626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:20:54.686354  108626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:20:54.686365  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:20:54.686370  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:20:55.186335  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:20:55.186502  108626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:20:55.186532  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:20:55.686439  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:20:55.686439  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:20:55.686615  108626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1123 08:20:55.779639  108626 node_ready.go:57] node "addons-450053" has "Ready":"False" status (will retry)
	I1123 08:20:55.929776  108626 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.482848682s)
	I1123 08:20:56.187067  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:20:56.187068  108626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:20:56.187068  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:20:56.686083  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:20:56.686083  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:20:56.686290  108626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:20:57.186674  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:20:57.186731  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:20:57.186810  108626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:20:57.686240  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:20:57.686280  108626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:20:57.686331  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1123 08:20:57.780038  108626 node_ready.go:57] node "addons-450053" has "Ready":"False" status (will retry)
	I1123 08:20:58.186107  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:20:58.186173  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:20:58.186183  108626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:20:58.686994  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:20:58.687028  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:20:58.687138  108626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:20:58.918220  108626 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1123 08:20:58.918290  108626 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-450053
	I1123 08:20:58.936101  108626 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21969-103686/.minikube/machines/addons-450053/id_rsa Username:docker}
	I1123 08:20:59.051339  108626 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1123 08:20:59.063594  108626 addons.go:239] Setting addon gcp-auth=true in "addons-450053"
	I1123 08:20:59.063656  108626 host.go:66] Checking if "addons-450053" exists ...
	I1123 08:20:59.064090  108626 cli_runner.go:164] Run: docker container inspect addons-450053 --format={{.State.Status}}
	I1123 08:20:59.081736  108626 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1123 08:20:59.081787  108626 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-450053
	I1123 08:20:59.099192  108626 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21969-103686/.minikube/machines/addons-450053/id_rsa Username:docker}
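
[annotation] The cli_runner lines above recover the host port that Docker mapped to the container's 22/tcp so an SSH client can be built against 127.0.0.1 (port 32768 here). A sketch of the same lookup; the Go template is copied from the log (the surrounding single quotes there are shell-quoting artifacts and are dropped when invoking exec directly), and the container name is the profile under test:

	package main

	import (
		"fmt"
		"log"
		"os/exec"
		"strings"
	)

	func main() {
		// First host port bound to the container's 22/tcp, same template as the log.
		tmpl := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
		out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, "addons-450053").Output()
		if err != nil {
			log.Fatal(err)
		}
		port := strings.TrimSpace(string(out))
		fmt.Printf("ssh -i <profile id_rsa> -p %s docker@127.0.0.1\n", port)
	}
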
	I1123 08:20:59.186667  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:20:59.186746  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:20:59.186877  108626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:20:59.197427  108626 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1123 08:20:59.198770  108626 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1123 08:20:59.199825  108626 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1123 08:20:59.199844  108626 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1123 08:20:59.212485  108626 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1123 08:20:59.212507  108626 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1123 08:20:59.224833  108626 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1123 08:20:59.224856  108626 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1123 08:20:59.237447  108626 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1123 08:20:59.526081  108626 addons.go:495] Verifying addon gcp-auth=true in "addons-450053"
	I1123 08:20:59.527285  108626 out.go:179] * Verifying gcp-auth addon...
	I1123 08:20:59.529056  108626 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1123 08:20:59.531278  108626 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1123 08:20:59.531299  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:20:59.685937  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:20:59.686027  108626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:20:59.686343  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:21:00.031655  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:21:00.186548  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:21:00.186639  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:21:00.186680  108626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1123 08:21:00.280042  108626 node_ready.go:57] node "addons-450053" has "Ready":"False" status (will retry)
	I1123 08:21:00.532792  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:21:00.686951  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:21:00.686961  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:21:00.687000  108626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:21:01.032171  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:21:01.186044  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:21:01.186089  108626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:21:01.186250  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:21:01.532177  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:21:01.685753  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:21:01.685986  108626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:21:01.686277  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:21:02.032572  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:21:02.186138  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:21:02.186189  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:21:02.186468  108626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:21:02.532429  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:21:02.686431  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:21:02.686442  108626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:21:02.686640  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1123 08:21:02.780734  108626 node_ready.go:57] node "addons-450053" has "Ready":"False" status (will retry)
	I1123 08:21:03.031859  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:21:03.191048  108626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:21:03.192008  108626 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1123 08:21:03.192085  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:21:03.192054  108626 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1123 08:21:03.192148  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:21:03.279923  108626 node_ready.go:49] node "addons-450053" is "Ready"
	I1123 08:21:03.279953  108626 node_ready.go:38] duration metric: took 11.503075337s for node "addons-450053" to be "Ready" ...
	I1123 08:21:03.279981  108626 api_server.go:52] waiting for apiserver process to appear ...
	I1123 08:21:03.280037  108626 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1123 08:21:03.295295  108626 api_server.go:72] duration metric: took 12.087400322s to wait for apiserver process to appear ...
	I1123 08:21:03.295331  108626 api_server.go:88] waiting for apiserver healthz status ...
	I1123 08:21:03.295359  108626 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1123 08:21:03.299779  108626 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1123 08:21:03.300646  108626 api_server.go:141] control plane version: v1.34.1
	I1123 08:21:03.300670  108626 api_server.go:131] duration metric: took 5.331287ms to wait for apiserver health ...
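
[annotation] The healthz probe above is a plain HTTPS GET that passes once the endpoint returns 200 with body "ok". A minimal sketch against the same URL; certificate verification is skipped here purely for brevity, whereas the real check verifies against the cluster CA:

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"log"
		"net/http"
	)

	func main() {
		client := &http.Client{Transport: &http.Transport{
			// Illustrative shortcut only: trust-anything TLS for a local sketch.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		}}
		resp, err := client.Get("https://192.168.49.2:8443/healthz")
		if err != nil {
			log.Fatal(err)
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
	}
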
	I1123 08:21:03.300679  108626 system_pods.go:43] waiting for kube-system pods to appear ...
	I1123 08:21:03.303614  108626 system_pods.go:59] 20 kube-system pods found
	I1123 08:21:03.303642  108626 system_pods.go:61] "amd-gpu-device-plugin-625vc" [c5f91220-0c10-421a-80d5-efb93906fabe] Pending
	I1123 08:21:03.303653  108626 system_pods.go:61] "coredns-66bc5c9577-n2ksh" [1fe3dca6-6b07-4de2-83e3-29ea85694c99] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 08:21:03.303659  108626 system_pods.go:61] "csi-hostpath-attacher-0" [02d82dd0-2aba-4204-b5b2-fc371db85e0e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1123 08:21:03.303667  108626 system_pods.go:61] "csi-hostpath-resizer-0" [bf10fe87-1932-4c75-a8ea-9b08d219357b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1123 08:21:03.303674  108626 system_pods.go:61] "csi-hostpathplugin-kgwc9" [d8646ded-167a-444d-bccf-1ad472465376] Pending
	I1123 08:21:03.303678  108626 system_pods.go:61] "etcd-addons-450053" [c141c9f6-d76e-4275-b24b-c96e3b1ba0df] Running
	I1123 08:21:03.303688  108626 system_pods.go:61] "kindnet-w25rx" [df5a9205-65f6-473b-9aaf-e2b5f0594c9c] Running
	I1123 08:21:03.303695  108626 system_pods.go:61] "kube-apiserver-addons-450053" [beed6881-dac2-4e3a-a0e5-30d253cdff32] Running
	I1123 08:21:03.303698  108626 system_pods.go:61] "kube-controller-manager-addons-450053" [f2cc52ce-540e-4930-a13b-ec022573988d] Running
	I1123 08:21:03.303704  108626 system_pods.go:61] "kube-ingress-dns-minikube" [c96ec810-070a-45be-b95d-a0efab2d29b1] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1123 08:21:03.303708  108626 system_pods.go:61] "kube-proxy-mvm7j" [82b7e31a-fe86-48f3-aaf9-804bae8294a8] Running
	I1123 08:21:03.303712  108626 system_pods.go:61] "kube-scheduler-addons-450053" [de6dfc85-172b-4196-81af-693846b1d79b] Running
	I1123 08:21:03.303716  108626 system_pods.go:61] "metrics-server-85b7d694d7-74pfv" [45d13fcb-95ca-476d-b5f6-96b8120fe8e4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1123 08:21:03.303722  108626 system_pods.go:61] "nvidia-device-plugin-daemonset-hpnrm" [f84547e5-5d46-4cfc-874a-413b67ecdb49] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1123 08:21:03.303729  108626 system_pods.go:61] "registry-6b586f9694-48d75" [cc2a224a-be19-4f84-8699-fcb2e9fc4c59] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1123 08:21:03.303733  108626 system_pods.go:61] "registry-creds-764b6fb674-gvfgs" [b765d6ad-2418-44c7-9da3-fb58dc143860] Pending
	I1123 08:21:03.303741  108626 system_pods.go:61] "registry-proxy-l5z45" [0dc46992-7951-4eae-8ad8-1e175ba138cb] Pending
	I1123 08:21:03.303747  108626 system_pods.go:61] "snapshot-controller-7d9fbc56b8-c52wh" [812cadfd-e0af-4e91-a85d-f0bf11412d6c] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1123 08:21:03.303753  108626 system_pods.go:61] "snapshot-controller-7d9fbc56b8-jgxr4" [fc46ed37-adfa-4ddd-b87f-d44f1f55d872] Pending
	I1123 08:21:03.303758  108626 system_pods.go:61] "storage-provisioner" [5640be3b-31a4-4ece-9add-676a90ef0dfd] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 08:21:03.303765  108626 system_pods.go:74] duration metric: took 3.079683ms to wait for pod list to return data ...
	I1123 08:21:03.303774  108626 default_sa.go:34] waiting for default service account to be created ...
	I1123 08:21:03.305931  108626 default_sa.go:45] found service account: "default"
	I1123 08:21:03.305955  108626 default_sa.go:55] duration metric: took 2.174036ms for default service account to be created ...
	I1123 08:21:03.305985  108626 system_pods.go:116] waiting for k8s-apps to be running ...
	I1123 08:21:03.309230  108626 system_pods.go:86] 20 kube-system pods found
	I1123 08:21:03.309258  108626 system_pods.go:89] "amd-gpu-device-plugin-625vc" [c5f91220-0c10-421a-80d5-efb93906fabe] Pending
	I1123 08:21:03.309271  108626 system_pods.go:89] "coredns-66bc5c9577-n2ksh" [1fe3dca6-6b07-4de2-83e3-29ea85694c99] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 08:21:03.309280  108626 system_pods.go:89] "csi-hostpath-attacher-0" [02d82dd0-2aba-4204-b5b2-fc371db85e0e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1123 08:21:03.309295  108626 system_pods.go:89] "csi-hostpath-resizer-0" [bf10fe87-1932-4c75-a8ea-9b08d219357b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1123 08:21:03.309301  108626 system_pods.go:89] "csi-hostpathplugin-kgwc9" [d8646ded-167a-444d-bccf-1ad472465376] Pending
	I1123 08:21:03.309309  108626 system_pods.go:89] "etcd-addons-450053" [c141c9f6-d76e-4275-b24b-c96e3b1ba0df] Running
	I1123 08:21:03.309315  108626 system_pods.go:89] "kindnet-w25rx" [df5a9205-65f6-473b-9aaf-e2b5f0594c9c] Running
	I1123 08:21:03.309323  108626 system_pods.go:89] "kube-apiserver-addons-450053" [beed6881-dac2-4e3a-a0e5-30d253cdff32] Running
	I1123 08:21:03.309327  108626 system_pods.go:89] "kube-controller-manager-addons-450053" [f2cc52ce-540e-4930-a13b-ec022573988d] Running
	I1123 08:21:03.309338  108626 system_pods.go:89] "kube-ingress-dns-minikube" [c96ec810-070a-45be-b95d-a0efab2d29b1] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1123 08:21:03.309347  108626 system_pods.go:89] "kube-proxy-mvm7j" [82b7e31a-fe86-48f3-aaf9-804bae8294a8] Running
	I1123 08:21:03.309356  108626 system_pods.go:89] "kube-scheduler-addons-450053" [de6dfc85-172b-4196-81af-693846b1d79b] Running
	I1123 08:21:03.309367  108626 system_pods.go:89] "metrics-server-85b7d694d7-74pfv" [45d13fcb-95ca-476d-b5f6-96b8120fe8e4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1123 08:21:03.309378  108626 system_pods.go:89] "nvidia-device-plugin-daemonset-hpnrm" [f84547e5-5d46-4cfc-874a-413b67ecdb49] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1123 08:21:03.309389  108626 system_pods.go:89] "registry-6b586f9694-48d75" [cc2a224a-be19-4f84-8699-fcb2e9fc4c59] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1123 08:21:03.309400  108626 system_pods.go:89] "registry-creds-764b6fb674-gvfgs" [b765d6ad-2418-44c7-9da3-fb58dc143860] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1123 08:21:03.309410  108626 system_pods.go:89] "registry-proxy-l5z45" [0dc46992-7951-4eae-8ad8-1e175ba138cb] Pending
	I1123 08:21:03.309418  108626 system_pods.go:89] "snapshot-controller-7d9fbc56b8-c52wh" [812cadfd-e0af-4e91-a85d-f0bf11412d6c] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1123 08:21:03.309423  108626 system_pods.go:89] "snapshot-controller-7d9fbc56b8-jgxr4" [fc46ed37-adfa-4ddd-b87f-d44f1f55d872] Pending
	I1123 08:21:03.309435  108626 system_pods.go:89] "storage-provisioner" [5640be3b-31a4-4ece-9add-676a90ef0dfd] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 08:21:03.309456  108626 retry.go:31] will retry after 298.833993ms: missing components: kube-dns
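
[annotation] The varying delays printed by retry.go:31 in this stretch (298ms, then 348ms, 358ms, 497ms) are the signature of a growing, jittered backoff around the "are all k8s-apps running" check. A generic sketch of that shape; the growth factor and jitter range are assumptions, not minikube's exact constants:

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// retryWithBackoff retries fn until it succeeds or attempts run out,
	// sleeping a jittered, growing interval between tries.
	func retryWithBackoff(attempts int, base time.Duration, fn func() error) error {
		wait := base
		for i := 0; i < attempts; i++ {
			err := fn()
			if err == nil {
				return nil
			}
			fmt.Printf("will retry after %s: %v\n", wait, err)
			time.Sleep(wait)
			// Grow ~1.2x per attempt with up to 20% random jitter (assumed values).
			wait = time.Duration(float64(wait) * (1.2 + 0.2*rand.Float64()))
		}
		return errors.New("condition never met")
	}

	func main() {
		tries := 0
		_ = retryWithBackoff(10, 300*time.Millisecond, func() error {
			tries++
			if tries >= 3 {
				return nil // e.g. kube-dns finally reports Running
			}
			return errors.New("missing components: kube-dns")
		})
	}
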
	I1123 08:21:03.537761  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:21:03.639866  108626 system_pods.go:86] 20 kube-system pods found
	I1123 08:21:03.639912  108626 system_pods.go:89] "amd-gpu-device-plugin-625vc" [c5f91220-0c10-421a-80d5-efb93906fabe] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1123 08:21:03.639923  108626 system_pods.go:89] "coredns-66bc5c9577-n2ksh" [1fe3dca6-6b07-4de2-83e3-29ea85694c99] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 08:21:03.639934  108626 system_pods.go:89] "csi-hostpath-attacher-0" [02d82dd0-2aba-4204-b5b2-fc371db85e0e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1123 08:21:03.639943  108626 system_pods.go:89] "csi-hostpath-resizer-0" [bf10fe87-1932-4c75-a8ea-9b08d219357b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1123 08:21:03.640007  108626 system_pods.go:89] "csi-hostpathplugin-kgwc9" [d8646ded-167a-444d-bccf-1ad472465376] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1123 08:21:03.640017  108626 system_pods.go:89] "etcd-addons-450053" [c141c9f6-d76e-4275-b24b-c96e3b1ba0df] Running
	I1123 08:21:03.640023  108626 system_pods.go:89] "kindnet-w25rx" [df5a9205-65f6-473b-9aaf-e2b5f0594c9c] Running
	I1123 08:21:03.640029  108626 system_pods.go:89] "kube-apiserver-addons-450053" [beed6881-dac2-4e3a-a0e5-30d253cdff32] Running
	I1123 08:21:03.640035  108626 system_pods.go:89] "kube-controller-manager-addons-450053" [f2cc52ce-540e-4930-a13b-ec022573988d] Running
	I1123 08:21:03.640043  108626 system_pods.go:89] "kube-ingress-dns-minikube" [c96ec810-070a-45be-b95d-a0efab2d29b1] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1123 08:21:03.640056  108626 system_pods.go:89] "kube-proxy-mvm7j" [82b7e31a-fe86-48f3-aaf9-804bae8294a8] Running
	I1123 08:21:03.640063  108626 system_pods.go:89] "kube-scheduler-addons-450053" [de6dfc85-172b-4196-81af-693846b1d79b] Running
	I1123 08:21:03.640073  108626 system_pods.go:89] "metrics-server-85b7d694d7-74pfv" [45d13fcb-95ca-476d-b5f6-96b8120fe8e4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1123 08:21:03.640083  108626 system_pods.go:89] "nvidia-device-plugin-daemonset-hpnrm" [f84547e5-5d46-4cfc-874a-413b67ecdb49] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1123 08:21:03.640096  108626 system_pods.go:89] "registry-6b586f9694-48d75" [cc2a224a-be19-4f84-8699-fcb2e9fc4c59] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1123 08:21:03.640106  108626 system_pods.go:89] "registry-creds-764b6fb674-gvfgs" [b765d6ad-2418-44c7-9da3-fb58dc143860] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1123 08:21:03.640114  108626 system_pods.go:89] "registry-proxy-l5z45" [0dc46992-7951-4eae-8ad8-1e175ba138cb] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1123 08:21:03.640125  108626 system_pods.go:89] "snapshot-controller-7d9fbc56b8-c52wh" [812cadfd-e0af-4e91-a85d-f0bf11412d6c] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1123 08:21:03.640136  108626 system_pods.go:89] "snapshot-controller-7d9fbc56b8-jgxr4" [fc46ed37-adfa-4ddd-b87f-d44f1f55d872] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1123 08:21:03.640148  108626 system_pods.go:89] "storage-provisioner" [5640be3b-31a4-4ece-9add-676a90ef0dfd] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 08:21:03.640177  108626 retry.go:31] will retry after 348.707573ms: missing components: kube-dns
	I1123 08:21:03.738886  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:21:03.739099  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:21:03.739151  108626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:21:03.994128  108626 system_pods.go:86] 20 kube-system pods found
	I1123 08:21:03.994162  108626 system_pods.go:89] "amd-gpu-device-plugin-625vc" [c5f91220-0c10-421a-80d5-efb93906fabe] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1123 08:21:03.994170  108626 system_pods.go:89] "coredns-66bc5c9577-n2ksh" [1fe3dca6-6b07-4de2-83e3-29ea85694c99] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 08:21:03.994179  108626 system_pods.go:89] "csi-hostpath-attacher-0" [02d82dd0-2aba-4204-b5b2-fc371db85e0e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1123 08:21:03.994186  108626 system_pods.go:89] "csi-hostpath-resizer-0" [bf10fe87-1932-4c75-a8ea-9b08d219357b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1123 08:21:03.994194  108626 system_pods.go:89] "csi-hostpathplugin-kgwc9" [d8646ded-167a-444d-bccf-1ad472465376] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1123 08:21:03.994202  108626 system_pods.go:89] "etcd-addons-450053" [c141c9f6-d76e-4275-b24b-c96e3b1ba0df] Running
	I1123 08:21:03.994209  108626 system_pods.go:89] "kindnet-w25rx" [df5a9205-65f6-473b-9aaf-e2b5f0594c9c] Running
	I1123 08:21:03.994218  108626 system_pods.go:89] "kube-apiserver-addons-450053" [beed6881-dac2-4e3a-a0e5-30d253cdff32] Running
	I1123 08:21:03.994223  108626 system_pods.go:89] "kube-controller-manager-addons-450053" [f2cc52ce-540e-4930-a13b-ec022573988d] Running
	I1123 08:21:03.994234  108626 system_pods.go:89] "kube-ingress-dns-minikube" [c96ec810-070a-45be-b95d-a0efab2d29b1] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1123 08:21:03.994240  108626 system_pods.go:89] "kube-proxy-mvm7j" [82b7e31a-fe86-48f3-aaf9-804bae8294a8] Running
	I1123 08:21:03.994249  108626 system_pods.go:89] "kube-scheduler-addons-450053" [de6dfc85-172b-4196-81af-693846b1d79b] Running
	I1123 08:21:03.994261  108626 system_pods.go:89] "metrics-server-85b7d694d7-74pfv" [45d13fcb-95ca-476d-b5f6-96b8120fe8e4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1123 08:21:03.994270  108626 system_pods.go:89] "nvidia-device-plugin-daemonset-hpnrm" [f84547e5-5d46-4cfc-874a-413b67ecdb49] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1123 08:21:03.994279  108626 system_pods.go:89] "registry-6b586f9694-48d75" [cc2a224a-be19-4f84-8699-fcb2e9fc4c59] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1123 08:21:03.994292  108626 system_pods.go:89] "registry-creds-764b6fb674-gvfgs" [b765d6ad-2418-44c7-9da3-fb58dc143860] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1123 08:21:03.994300  108626 system_pods.go:89] "registry-proxy-l5z45" [0dc46992-7951-4eae-8ad8-1e175ba138cb] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1123 08:21:03.994308  108626 system_pods.go:89] "snapshot-controller-7d9fbc56b8-c52wh" [812cadfd-e0af-4e91-a85d-f0bf11412d6c] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1123 08:21:03.994317  108626 system_pods.go:89] "snapshot-controller-7d9fbc56b8-jgxr4" [fc46ed37-adfa-4ddd-b87f-d44f1f55d872] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1123 08:21:03.994325  108626 system_pods.go:89] "storage-provisioner" [5640be3b-31a4-4ece-9add-676a90ef0dfd] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 08:21:03.994348  108626 retry.go:31] will retry after 358.645575ms: missing components: kube-dns
	I1123 08:21:04.032063  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:21:04.185957  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:21:04.186512  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:21:04.186813  108626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:21:04.358219  108626 system_pods.go:86] 20 kube-system pods found
	I1123 08:21:04.358256  108626 system_pods.go:89] "amd-gpu-device-plugin-625vc" [c5f91220-0c10-421a-80d5-efb93906fabe] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1123 08:21:04.358266  108626 system_pods.go:89] "coredns-66bc5c9577-n2ksh" [1fe3dca6-6b07-4de2-83e3-29ea85694c99] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 08:21:04.358280  108626 system_pods.go:89] "csi-hostpath-attacher-0" [02d82dd0-2aba-4204-b5b2-fc371db85e0e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1123 08:21:04.358288  108626 system_pods.go:89] "csi-hostpath-resizer-0" [bf10fe87-1932-4c75-a8ea-9b08d219357b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1123 08:21:04.358297  108626 system_pods.go:89] "csi-hostpathplugin-kgwc9" [d8646ded-167a-444d-bccf-1ad472465376] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1123 08:21:04.358304  108626 system_pods.go:89] "etcd-addons-450053" [c141c9f6-d76e-4275-b24b-c96e3b1ba0df] Running
	I1123 08:21:04.358310  108626 system_pods.go:89] "kindnet-w25rx" [df5a9205-65f6-473b-9aaf-e2b5f0594c9c] Running
	I1123 08:21:04.358316  108626 system_pods.go:89] "kube-apiserver-addons-450053" [beed6881-dac2-4e3a-a0e5-30d253cdff32] Running
	I1123 08:21:04.358349  108626 system_pods.go:89] "kube-controller-manager-addons-450053" [f2cc52ce-540e-4930-a13b-ec022573988d] Running
	I1123 08:21:04.358365  108626 system_pods.go:89] "kube-ingress-dns-minikube" [c96ec810-070a-45be-b95d-a0efab2d29b1] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1123 08:21:04.358370  108626 system_pods.go:89] "kube-proxy-mvm7j" [82b7e31a-fe86-48f3-aaf9-804bae8294a8] Running
	I1123 08:21:04.358377  108626 system_pods.go:89] "kube-scheduler-addons-450053" [de6dfc85-172b-4196-81af-693846b1d79b] Running
	I1123 08:21:04.358388  108626 system_pods.go:89] "metrics-server-85b7d694d7-74pfv" [45d13fcb-95ca-476d-b5f6-96b8120fe8e4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1123 08:21:04.358397  108626 system_pods.go:89] "nvidia-device-plugin-daemonset-hpnrm" [f84547e5-5d46-4cfc-874a-413b67ecdb49] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1123 08:21:04.358407  108626 system_pods.go:89] "registry-6b586f9694-48d75" [cc2a224a-be19-4f84-8699-fcb2e9fc4c59] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1123 08:21:04.358415  108626 system_pods.go:89] "registry-creds-764b6fb674-gvfgs" [b765d6ad-2418-44c7-9da3-fb58dc143860] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1123 08:21:04.358423  108626 system_pods.go:89] "registry-proxy-l5z45" [0dc46992-7951-4eae-8ad8-1e175ba138cb] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1123 08:21:04.358434  108626 system_pods.go:89] "snapshot-controller-7d9fbc56b8-c52wh" [812cadfd-e0af-4e91-a85d-f0bf11412d6c] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1123 08:21:04.358445  108626 system_pods.go:89] "snapshot-controller-7d9fbc56b8-jgxr4" [fc46ed37-adfa-4ddd-b87f-d44f1f55d872] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1123 08:21:04.358454  108626 system_pods.go:89] "storage-provisioner" [5640be3b-31a4-4ece-9add-676a90ef0dfd] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 08:21:04.358473  108626 retry.go:31] will retry after 497.770376ms: missing components: kube-dns
	I1123 08:21:04.532527  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:21:04.687112  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:21:04.687212  108626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:21:04.687224  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:21:04.861161  108626 system_pods.go:86] 20 kube-system pods found
	I1123 08:21:04.861198  108626 system_pods.go:89] "amd-gpu-device-plugin-625vc" [c5f91220-0c10-421a-80d5-efb93906fabe] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1123 08:21:04.861210  108626 system_pods.go:89] "coredns-66bc5c9577-n2ksh" [1fe3dca6-6b07-4de2-83e3-29ea85694c99] Running
	I1123 08:21:04.861220  108626 system_pods.go:89] "csi-hostpath-attacher-0" [02d82dd0-2aba-4204-b5b2-fc371db85e0e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1123 08:21:04.861225  108626 system_pods.go:89] "csi-hostpath-resizer-0" [bf10fe87-1932-4c75-a8ea-9b08d219357b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1123 08:21:04.861231  108626 system_pods.go:89] "csi-hostpathplugin-kgwc9" [d8646ded-167a-444d-bccf-1ad472465376] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1123 08:21:04.861234  108626 system_pods.go:89] "etcd-addons-450053" [c141c9f6-d76e-4275-b24b-c96e3b1ba0df] Running
	I1123 08:21:04.861238  108626 system_pods.go:89] "kindnet-w25rx" [df5a9205-65f6-473b-9aaf-e2b5f0594c9c] Running
	I1123 08:21:04.861242  108626 system_pods.go:89] "kube-apiserver-addons-450053" [beed6881-dac2-4e3a-a0e5-30d253cdff32] Running
	I1123 08:21:04.861246  108626 system_pods.go:89] "kube-controller-manager-addons-450053" [f2cc52ce-540e-4930-a13b-ec022573988d] Running
	I1123 08:21:04.861251  108626 system_pods.go:89] "kube-ingress-dns-minikube" [c96ec810-070a-45be-b95d-a0efab2d29b1] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1123 08:21:04.861254  108626 system_pods.go:89] "kube-proxy-mvm7j" [82b7e31a-fe86-48f3-aaf9-804bae8294a8] Running
	I1123 08:21:04.861258  108626 system_pods.go:89] "kube-scheduler-addons-450053" [de6dfc85-172b-4196-81af-693846b1d79b] Running
	I1123 08:21:04.861263  108626 system_pods.go:89] "metrics-server-85b7d694d7-74pfv" [45d13fcb-95ca-476d-b5f6-96b8120fe8e4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1123 08:21:04.861268  108626 system_pods.go:89] "nvidia-device-plugin-daemonset-hpnrm" [f84547e5-5d46-4cfc-874a-413b67ecdb49] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1123 08:21:04.861276  108626 system_pods.go:89] "registry-6b586f9694-48d75" [cc2a224a-be19-4f84-8699-fcb2e9fc4c59] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1123 08:21:04.861281  108626 system_pods.go:89] "registry-creds-764b6fb674-gvfgs" [b765d6ad-2418-44c7-9da3-fb58dc143860] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1123 08:21:04.861286  108626 system_pods.go:89] "registry-proxy-l5z45" [0dc46992-7951-4eae-8ad8-1e175ba138cb] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1123 08:21:04.861290  108626 system_pods.go:89] "snapshot-controller-7d9fbc56b8-c52wh" [812cadfd-e0af-4e91-a85d-f0bf11412d6c] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1123 08:21:04.861299  108626 system_pods.go:89] "snapshot-controller-7d9fbc56b8-jgxr4" [fc46ed37-adfa-4ddd-b87f-d44f1f55d872] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1123 08:21:04.861304  108626 system_pods.go:89] "storage-provisioner" [5640be3b-31a4-4ece-9add-676a90ef0dfd] Running
	I1123 08:21:04.861312  108626 system_pods.go:126] duration metric: took 1.555320579s to wait for k8s-apps to be running ...
	I1123 08:21:04.861321  108626 system_svc.go:44] waiting for kubelet service to be running ....
	I1123 08:21:04.861367  108626 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 08:21:04.874368  108626 system_svc.go:56] duration metric: took 13.038091ms WaitForService to wait for kubelet
	I1123 08:21:04.874396  108626 kubeadm.go:587] duration metric: took 13.666506996s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1123 08:21:04.874421  108626 node_conditions.go:102] verifying NodePressure condition ...
	I1123 08:21:04.876958  108626 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1123 08:21:04.877006  108626 node_conditions.go:123] node cpu capacity is 8
	I1123 08:21:04.877027  108626 node_conditions.go:105] duration metric: took 2.600094ms to run NodePressure ...
	I1123 08:21:04.877045  108626 start.go:242] waiting for startup goroutines ...
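The checks logged above (kubelet active, then node capacity for the NodePressure verification) can be approximated by hand; a sketch, assuming a shell on the node via `minikube -p addons-450053 ssh` for the first command and the profile's kubeconfig for the second:

# on the node: same probe the harness runs over ssh
sudo systemctl is-active --quiet kubelet && echo "kubelet running"
# from the host: the capacity map whose cpu/ephemeral-storage values are logged above
kubectl get nodes -o jsonpath='{.items[0].status.capacity}'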
	I1123 08:21:05.032883  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:21:05.185716  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:21:05.185764  108626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:21:05.186074  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:21:05.531683  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	[... the same four selectors (gcp-auth, registry, ingress-nginx, csi-hostpath-driver) polled roughly twice per second, all still Pending, through 08:21:36 ...]
	I1123 08:21:36.186023  108626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:21:36.186050  108626 kapi.go:107] duration metric: took 43.003222323s to wait for kubernetes.io/minikube-addons=registry ...
	I1123 08:21:36.186567  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	[... gcp-auth, ingress-nginx, and csi-hostpath-driver polled every ~0.5s, all still Pending, through 08:21:43 ...]
	I1123 08:21:43.031906  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:21:43.185633  108626 kapi.go:107] duration metric: took 50.002801925s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1123 08:21:43.186497  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	[... gcp-auth and csi-hostpath-driver polled every ~0.5s, both still Pending, through 08:21:45 ...]
	I1123 08:21:45.687185  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:21:46.032715  108626 kapi.go:107] duration metric: took 46.503656157s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1123 08:21:46.035428  108626 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-450053 cluster.
	I1123 08:21:46.036871  108626 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1123 08:21:46.038206  108626 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
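The two opt-out paths described in the messages above can be exercised directly. A minimal sketch of the per-pod label (pod name and image are placeholders), followed by the refresh re-run the message refers to:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: no-gcp-creds            # hypothetical pod name
  labels:
    gcp-auth-skip-secret: "true"   # key documented in the message above; value is arbitrary
spec:
  containers:
  - name: app
    image: gcr.io/k8s-minikube/busybox
    command: ["sleep", "3600"]
EOF
out/minikube-linux-amd64 -p addons-450053 addons enable gcp-auth --refresh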
	I1123 08:21:46.187894  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	[... csi-hostpath-driver polled every ~0.5s, still Pending, through 08:21:51 ...]
	I1123 08:21:52.187433  108626 kapi.go:107] duration metric: took 59.003509486s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1123 08:21:52.265809  108626 out.go:179] * Enabled addons: inspektor-gadget, registry-creds, nvidia-device-plugin, cloud-spanner, ingress-dns, amd-gpu-device-plugin, storage-provisioner, metrics-server, storage-provisioner-rancher, yakd, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I1123 08:21:52.267943  108626 addons.go:530] duration metric: took 1m1.059997692s for enable addons: enabled=[inspektor-gadget registry-creds nvidia-device-plugin cloud-spanner ingress-dns amd-gpu-device-plugin storage-provisioner metrics-server storage-provisioner-rancher yakd volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
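For reference, the enabled set summarized above can be re-queried at any point from the same binary used throughout this report:

out/minikube-linux-amd64 -p addons-450053 addons list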
	I1123 08:21:52.268019  108626 start.go:247] waiting for cluster config update ...
	I1123 08:21:52.268051  108626 start.go:256] writing updated cluster config ...
	I1123 08:21:52.268390  108626 ssh_runner.go:195] Run: rm -f paused
	I1123 08:21:52.272720  108626 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1123 08:21:52.276036  108626 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-n2ksh" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:21:52.280050  108626 pod_ready.go:94] pod "coredns-66bc5c9577-n2ksh" is "Ready"
	I1123 08:21:52.280075  108626 pod_ready.go:86] duration metric: took 4.013698ms for pod "coredns-66bc5c9577-n2ksh" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:21:52.281948  108626 pod_ready.go:83] waiting for pod "etcd-addons-450053" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:21:52.285514  108626 pod_ready.go:94] pod "etcd-addons-450053" is "Ready"
	I1123 08:21:52.285534  108626 pod_ready.go:86] duration metric: took 3.553039ms for pod "etcd-addons-450053" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:21:52.287419  108626 pod_ready.go:83] waiting for pod "kube-apiserver-addons-450053" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:21:52.290730  108626 pod_ready.go:94] pod "kube-apiserver-addons-450053" is "Ready"
	I1123 08:21:52.290751  108626 pod_ready.go:86] duration metric: took 3.312968ms for pod "kube-apiserver-addons-450053" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:21:52.292403  108626 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-450053" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:21:52.676138  108626 pod_ready.go:94] pod "kube-controller-manager-addons-450053" is "Ready"
	I1123 08:21:52.676171  108626 pod_ready.go:86] duration metric: took 383.745849ms for pod "kube-controller-manager-addons-450053" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:21:52.877186  108626 pod_ready.go:83] waiting for pod "kube-proxy-mvm7j" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:21:53.276222  108626 pod_ready.go:94] pod "kube-proxy-mvm7j" is "Ready"
	I1123 08:21:53.276255  108626 pod_ready.go:86] duration metric: took 399.044828ms for pod "kube-proxy-mvm7j" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:21:53.476786  108626 pod_ready.go:83] waiting for pod "kube-scheduler-addons-450053" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:21:53.876764  108626 pod_ready.go:94] pod "kube-scheduler-addons-450053" is "Ready"
	I1123 08:21:53.876791  108626 pod_ready.go:86] duration metric: took 399.975684ms for pod "kube-scheduler-addons-450053" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:21:53.876802  108626 pod_ready.go:40] duration metric: took 1.604052431s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
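A close hand-run analogue of this readiness sweep, using the same label selectors and namespace as the log (note the harness additionally tolerates pods that have been deleted, which kubectl wait does not):

for sel in k8s-app=kube-dns component=etcd component=kube-apiserver \
           component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler; do
  kubectl -n kube-system wait --for=condition=Ready pod -l "$sel" --timeout=4m
done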
	I1123 08:21:53.922248  108626 start.go:625] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1123 08:21:53.923728  108626 out.go:179] * Done! kubectl is now configured to use "addons-450053" cluster and "default" namespace by default
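The "minor skew: 0" note two lines up compares kubectl's client minor version against the cluster's server minor version; both values are printed by:

kubectl version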
	
	
	==> CRI-O <==
	Nov 23 08:23:22 addons-450053 crio[772]: time="2025-11-23T08:23:22.141121652Z" level=info msg="Creating container: kube-system/registry-creds-764b6fb674-gvfgs/registry-creds" id=c93d6822-7de1-41a8-8db8-8fc3f14d64a4 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 08:23:22 addons-450053 crio[772]: time="2025-11-23T08:23:22.141247157Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 08:23:22 addons-450053 crio[772]: time="2025-11-23T08:23:22.14655498Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 08:23:22 addons-450053 crio[772]: time="2025-11-23T08:23:22.147005414Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 08:23:22 addons-450053 crio[772]: time="2025-11-23T08:23:22.178407296Z" level=info msg="Created container b23a63ebc59aa2e2502c33b1df9342c8f120d91f07321751341008ea2afcefe7: kube-system/registry-creds-764b6fb674-gvfgs/registry-creds" id=c93d6822-7de1-41a8-8db8-8fc3f14d64a4 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 08:23:22 addons-450053 crio[772]: time="2025-11-23T08:23:22.179024588Z" level=info msg="Starting container: b23a63ebc59aa2e2502c33b1df9342c8f120d91f07321751341008ea2afcefe7" id=54990471-24db-4ca7-ad9a-7f0afe46d559 name=/runtime.v1.RuntimeService/StartContainer
	Nov 23 08:23:22 addons-450053 crio[772]: time="2025-11-23T08:23:22.180739972Z" level=info msg="Started container" PID=8857 containerID=b23a63ebc59aa2e2502c33b1df9342c8f120d91f07321751341008ea2afcefe7 description=kube-system/registry-creds-764b6fb674-gvfgs/registry-creds id=54990471-24db-4ca7-ad9a-7f0afe46d559 name=/runtime.v1.RuntimeService/StartContainer sandboxID=02fbd035042ae33d191902542bcacb250b9450bfce5172b90a547dd19465ba9f
	Nov 23 08:23:45 addons-450053 crio[772]: time="2025-11-23T08:23:45.620738819Z" level=info msg="Stopping pod sandbox: 05e975cca6a9c0da34de1e8292da818b0aa3e16bde9cd91feed34abda690d0f5" id=fb5d072b-2fd7-4cdf-8998-7eb93171384f name=/runtime.v1.RuntimeService/StopPodSandbox
	Nov 23 08:23:45 addons-450053 crio[772]: time="2025-11-23T08:23:45.620801722Z" level=info msg="Stopped pod sandbox (already stopped): 05e975cca6a9c0da34de1e8292da818b0aa3e16bde9cd91feed34abda690d0f5" id=fb5d072b-2fd7-4cdf-8998-7eb93171384f name=/runtime.v1.RuntimeService/StopPodSandbox
	Nov 23 08:23:45 addons-450053 crio[772]: time="2025-11-23T08:23:45.621181146Z" level=info msg="Removing pod sandbox: 05e975cca6a9c0da34de1e8292da818b0aa3e16bde9cd91feed34abda690d0f5" id=5cd1294a-3100-496c-930a-b5f29e16f6fe name=/runtime.v1.RuntimeService/RemovePodSandbox
	Nov 23 08:23:45 addons-450053 crio[772]: time="2025-11-23T08:23:45.625302333Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Nov 23 08:23:45 addons-450053 crio[772]: time="2025-11-23T08:23:45.62536801Z" level=info msg="Removed pod sandbox: 05e975cca6a9c0da34de1e8292da818b0aa3e16bde9cd91feed34abda690d0f5" id=5cd1294a-3100-496c-930a-b5f29e16f6fe name=/runtime.v1.RuntimeService/RemovePodSandbox
	Nov 23 08:24:40 addons-450053 crio[772]: time="2025-11-23T08:24:40.427372805Z" level=info msg="Running pod sandbox: default/hello-world-app-5d498dc89-bhsnh/POD" id=c4e4059a-cfa0-4473-897b-ac42d22177be name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 23 08:24:40 addons-450053 crio[772]: time="2025-11-23T08:24:40.42745014Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 08:24:40 addons-450053 crio[772]: time="2025-11-23T08:24:40.43430173Z" level=info msg="Got pod network &{Name:hello-world-app-5d498dc89-bhsnh Namespace:default ID:01d6a7257512b29653f31e5913ff17e2fd9d4bf0119c498437e38ade6eced6b7 UID:e6dd7e5c-1103-4275-9601-8b2eed7734ba NetNS:/var/run/netns/37b2ec10-7ed2-458d-b68b-3161723006ab Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc0005a6a90}] Aliases:map[]}"
	Nov 23 08:24:40 addons-450053 crio[772]: time="2025-11-23T08:24:40.434332609Z" level=info msg="Adding pod default_hello-world-app-5d498dc89-bhsnh to CNI network \"kindnet\" (type=ptp)"
	Nov 23 08:24:40 addons-450053 crio[772]: time="2025-11-23T08:24:40.444606575Z" level=info msg="Got pod network &{Name:hello-world-app-5d498dc89-bhsnh Namespace:default ID:01d6a7257512b29653f31e5913ff17e2fd9d4bf0119c498437e38ade6eced6b7 UID:e6dd7e5c-1103-4275-9601-8b2eed7734ba NetNS:/var/run/netns/37b2ec10-7ed2-458d-b68b-3161723006ab Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc0005a6a90}] Aliases:map[]}"
	Nov 23 08:24:40 addons-450053 crio[772]: time="2025-11-23T08:24:40.444723218Z" level=info msg="Checking pod default_hello-world-app-5d498dc89-bhsnh for CNI network kindnet (type=ptp)"
	Nov 23 08:24:40 addons-450053 crio[772]: time="2025-11-23T08:24:40.445550041Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Nov 23 08:24:40 addons-450053 crio[772]: time="2025-11-23T08:24:40.44633298Z" level=info msg="Ran pod sandbox 01d6a7257512b29653f31e5913ff17e2fd9d4bf0119c498437e38ade6eced6b7 with infra container: default/hello-world-app-5d498dc89-bhsnh/POD" id=c4e4059a-cfa0-4473-897b-ac42d22177be name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 23 08:24:40 addons-450053 crio[772]: time="2025-11-23T08:24:40.447587111Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=63c4d81e-4fea-4256-8837-7faac5d56e0d name=/runtime.v1.ImageService/ImageStatus
	Nov 23 08:24:40 addons-450053 crio[772]: time="2025-11-23T08:24:40.447745598Z" level=info msg="Image docker.io/kicbase/echo-server:1.0 not found" id=63c4d81e-4fea-4256-8837-7faac5d56e0d name=/runtime.v1.ImageService/ImageStatus
	Nov 23 08:24:40 addons-450053 crio[772]: time="2025-11-23T08:24:40.447790529Z" level=info msg="Neither image nor artfiact docker.io/kicbase/echo-server:1.0 found" id=63c4d81e-4fea-4256-8837-7faac5d56e0d name=/runtime.v1.ImageService/ImageStatus
	Nov 23 08:24:40 addons-450053 crio[772]: time="2025-11-23T08:24:40.448526139Z" level=info msg="Pulling image: docker.io/kicbase/echo-server:1.0" id=a43d99f0-023f-41d9-a18e-313f17f8d093 name=/runtime.v1.ImageService/PullImage
	Nov 23 08:24:40 addons-450053 crio[772]: time="2025-11-23T08:24:40.456755109Z" level=info msg="Trying to access \"docker.io/kicbase/echo-server:1.0\""
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED              STATE               NAME                                     ATTEMPT             POD ID              POD                                        NAMESPACE
	b23a63ebc59aa       docker.io/upmcenterprises/registry-creds@sha256:93a633d4f2b76a1c66bf19c664dbddc56093a543de6d54320f19f585ccd7d605                             About a minute ago   Running             registry-creds                           0                   02fbd035042ae       registry-creds-764b6fb674-gvfgs            kube-system
	b2ce29cede35e       docker.io/library/nginx@sha256:667473807103639a0aca5b49534a216d2b64f0fb868aaa801f023da0cdd781c7                                              2 minutes ago        Running             nginx                                    0                   a304ca89c011a       nginx                                      default
	ec30b116900af       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998                                          2 minutes ago        Running             busybox                                  0                   f664c97653118       busybox                                    default
	738d8d379f251       registry.k8s.io/sig-storage/csi-snapshotter@sha256:d844cb1faeb4ecf44bae6aea370c9c6128a87e665e40370021427d79a8819ee5                          2 minutes ago        Running             csi-snapshotter                          0                   9dd772da7e394       csi-hostpathplugin-kgwc9                   kube-system
	d984a1356e5ec       registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7                          2 minutes ago        Running             csi-provisioner                          0                   9dd772da7e394       csi-hostpathplugin-kgwc9                   kube-system
	f1bd36bf8d3aa       registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6                            2 minutes ago        Running             liveness-probe                           0                   9dd772da7e394       csi-hostpathplugin-kgwc9                   kube-system
	e39671b629175       registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11                           2 minutes ago        Running             hostpath                                 0                   9dd772da7e394       csi-hostpathplugin-kgwc9                   kube-system
	524005afa9256       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc                2 minutes ago        Running             node-driver-registrar                    0                   9dd772da7e394       csi-hostpathplugin-kgwc9                   kube-system
	b833e2e0c14c4       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:441f351b4520c228d29ba8c02a438d9ba971dafbbba5c91eaf882b1528797fb8                                 2 minutes ago        Running             gcp-auth                                 0                   54652698e9b98       gcp-auth-78565c9fb4-mxx49                  gcp-auth
	0d734b50d5d0a       registry.k8s.io/ingress-nginx/controller@sha256:7f2b00bd369a972bfb09acfe8c2525b99caeeeb54ab71d2822343e8fd4222e27                             2 minutes ago        Running             controller                               0                   4e7ba512f1fa6       ingress-nginx-controller-6c8bf45fb-k5xk4   ingress-nginx
	9a58810f94994       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:9a12b3c1d155bb081ff408a9b6c1cec18573c967e0c3917225b81ffe11c0b7f2                            3 minutes ago        Running             gadget                                   0                   f80be77ba9e98       gadget-mblm5                               gadget
	9989944eaa26f       gcr.io/k8s-minikube/kube-registry-proxy@sha256:8f72a79b63ca56074435e82b87fca2642a8117e60be313d3586dbe2bfff11cac                              3 minutes ago        Running             registry-proxy                           0                   e8cbaccad63a6       registry-proxy-l5z45                       kube-system
	e3688d5b85c22       docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                                     3 minutes ago        Running             amd-gpu-device-plugin                    0                   5e2f3a85cf238       amd-gpu-device-plugin-625vc                kube-system
	8ecc013e239af       nvcr.io/nvidia/k8s-device-plugin@sha256:20db699f1480b6f37423cab909e9c6df5a4fdbd981b405e0d72f00a86fee5100                                     3 minutes ago        Running             nvidia-device-plugin-ctr                 0                   77bf89844ed51       nvidia-device-plugin-daemonset-hpnrm       kube-system
	1dfc56fc8d94b       docker.io/library/registry@sha256:f57ffd2bb01704b6082396158e77ca6d1112bc6fe32315c322864de804750d8a                                           3 minutes ago        Running             registry                                 0                   ace3557fae304       registry-6b586f9694-48d75                  kube-system
	227f1cba9bc38       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864   3 minutes ago        Running             csi-external-health-monitor-controller   0                   9dd772da7e394       csi-hostpathplugin-kgwc9                   kube-system
	878966c2c1dd7       registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8                              3 minutes ago        Running             csi-resizer                              0                   ef69f18ca58fd       csi-hostpath-resizer-0                     kube-system
	43699df157c91       gcr.io/cloud-spanner-emulator/emulator@sha256:22a4d5b0f97bd0c2ee20da342493c5a60e40b4d62ec20c174cb32ff4ee1f65bf                               3 minutes ago        Running             cloud-spanner-emulator                   0                   5a15d90ad8077       cloud-spanner-emulator-5bdddb765-4vxx9     default
	819c675278b23       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:f016159150cb72d879e0d3b6852afbed68fe21d86be1e92c62ab7f56515287f5                   3 minutes ago        Exited              patch                                    0                   4d4b5f1b9fbd9       ingress-nginx-admission-patch-5fm9n        ingress-nginx
	f9cd2adc0709d       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      3 minutes ago        Running             volume-snapshot-controller               0                   a4d2d6fec0f56       snapshot-controller-7d9fbc56b8-jgxr4       kube-system
	a6ff371d12340       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      3 minutes ago        Running             volume-snapshot-controller               0                   ae36487e3d6b6       snapshot-controller-7d9fbc56b8-c52wh       kube-system
	492b96400c0c2       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:f016159150cb72d879e0d3b6852afbed68fe21d86be1e92c62ab7f56515287f5                   3 minutes ago        Exited              create                                   0                   de07514140f90       ingress-nginx-admission-create-ll2hd       ingress-nginx
	c414e963396a7       docker.io/marcnuri/yakd@sha256:8ebd1692ed5271719f13b728d9af7acb839aa04821e931c8993d908ad68b69fd                                              3 minutes ago        Running             yakd                                     0                   b8b58d174c021       yakd-dashboard-5ff678cb9-289rv             yakd-dashboard
	8364e195c165b       registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0                             3 minutes ago        Running             csi-attacher                             0                   acaf80940476e       csi-hostpath-attacher-0                    kube-system
	d99d5e6604fb8       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef                             3 minutes ago        Running             local-path-provisioner                   0                   0a11f64d4448e       local-path-provisioner-648f6765c9-xkbws    local-path-storage
	bca140d99c87f       docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7                               3 minutes ago        Running             minikube-ingress-dns                     0                   98d3c1d00a63d       kube-ingress-dns-minikube                  kube-system
	0e62c249e71fe       registry.k8s.io/metrics-server/metrics-server@sha256:5dd31abb8093690d9624a53277a00d2257e7e57e6766be3f9f54cf9f54cddbc1                        3 minutes ago        Running             metrics-server                           0                   2ab31846fb9a1       metrics-server-85b7d694d7-74pfv            kube-system
	4ea39bfdb1b8e       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                                             3 minutes ago        Running             storage-provisioner                      0                   f5bb3744257d6       storage-provisioner                        kube-system
	fc8e0ddc56a4b       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                                             3 minutes ago        Running             coredns                                  0                   f546e1227645b       coredns-66bc5c9577-n2ksh                   kube-system
	f5b7d2b9fc7fd       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                                                             3 minutes ago        Running             kube-proxy                               0                   bf0f80add7e5e       kube-proxy-mvm7j                           kube-system
	204df826a5f7f       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                                                             3 minutes ago        Running             kindnet-cni                              0                   3720309150b27       kindnet-w25rx                              kube-system
	2b03a5a989737       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                                                             4 minutes ago        Running             kube-apiserver                           0                   46716e45f5c05       kube-apiserver-addons-450053               kube-system
	5ce09f86a113c       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                                                             4 minutes ago        Running             kube-scheduler                           0                   73ff089ec72a8       kube-scheduler-addons-450053               kube-system
	3d0e901e59417       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                                                             4 minutes ago        Running             kube-controller-manager                  0                   e84a186999b88       kube-controller-manager-addons-450053      kube-system
	58c0dd74075cf       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                                                             4 minutes ago        Running             etcd                                     0                   13db6d055a8b9       etcd-addons-450053                         kube-system
	
	
	==> coredns [fc8e0ddc56a4bb558f8c5f776af3a634abca3c1b4e966b269f9b6b6d44daab9e] <==
	[INFO] 10.244.0.22:55824 - 50577 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000162677s
	[INFO] 10.244.0.22:51376 - 53762 "A IN storage.googleapis.com.europe-west2-a.c.k8s-minikube.internal. udp 90 false 1232" NXDOMAIN qr,rd,ra 190 0.004809748s
	[INFO] 10.244.0.22:56322 - 35652 "AAAA IN storage.googleapis.com.europe-west2-a.c.k8s-minikube.internal. udp 90 false 1232" NXDOMAIN qr,rd,ra 190 0.005280922s
	[INFO] 10.244.0.22:35576 - 46405 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.004460737s
	[INFO] 10.244.0.22:34495 - 30728 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.004561791s
	[INFO] 10.244.0.22:53202 - 15769 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.004608446s
	[INFO] 10.244.0.22:55965 - 16469 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.004902531s
	[INFO] 10.244.0.22:32940 - 62051 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001179393s
	[INFO] 10.244.0.22:50614 - 42059 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 268 0.001248579s
	[INFO] 10.244.0.27:58162 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000295979s
	[INFO] 10.244.0.27:43902 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000162461s
	[INFO] 10.244.0.31:56738 - 24297 "AAAA IN accounts.google.com.kube-system.svc.cluster.local. udp 67 false 512" NXDOMAIN qr,aa,rd 160 0.000275038s
	[INFO] 10.244.0.31:33802 - 65194 "A IN accounts.google.com.kube-system.svc.cluster.local. udp 67 false 512" NXDOMAIN qr,aa,rd 160 0.00034829s
	[INFO] 10.244.0.31:47785 - 35110 "A IN accounts.google.com.svc.cluster.local. udp 55 false 512" NXDOMAIN qr,aa,rd 148 0.000143715s
	[INFO] 10.244.0.31:57532 - 17888 "AAAA IN accounts.google.com.svc.cluster.local. udp 55 false 512" NXDOMAIN qr,aa,rd 148 0.00018982s
	[INFO] 10.244.0.31:36161 - 6614 "AAAA IN accounts.google.com.cluster.local. udp 51 false 512" NXDOMAIN qr,aa,rd 144 0.00011753s
	[INFO] 10.244.0.31:57006 - 46769 "A IN accounts.google.com.cluster.local. udp 51 false 512" NXDOMAIN qr,aa,rd 144 0.000145912s
	[INFO] 10.244.0.31:60937 - 48186 "AAAA IN accounts.google.com.europe-west2-a.c.k8s-minikube.internal. udp 76 false 512" NXDOMAIN qr,rd,ra 187 0.004898983s
	[INFO] 10.244.0.31:59955 - 17564 "A IN accounts.google.com.europe-west2-a.c.k8s-minikube.internal. udp 76 false 512" NXDOMAIN qr,rd,ra 187 0.005069914s
	[INFO] 10.244.0.31:42295 - 20067 "A IN accounts.google.com.c.k8s-minikube.internal. udp 61 false 512" NXDOMAIN qr,rd,ra 166 0.004278629s
	[INFO] 10.244.0.31:49651 - 58463 "AAAA IN accounts.google.com.c.k8s-minikube.internal. udp 61 false 512" NXDOMAIN qr,rd,ra 166 0.004992466s
	[INFO] 10.244.0.31:35305 - 59326 "A IN accounts.google.com.google.internal. udp 53 false 512" NXDOMAIN qr,rd,ra 158 0.004558656s
	[INFO] 10.244.0.31:54318 - 14277 "AAAA IN accounts.google.com.google.internal. udp 53 false 512" NXDOMAIN qr,rd,ra 158 0.004923053s
	[INFO] 10.244.0.31:60251 - 32797 "A IN accounts.google.com. udp 37 false 512" NOERROR qr,rd,ra 72 0.001653105s
	[INFO] 10.244.0.31:45605 - 32879 "AAAA IN accounts.google.com. udp 37 false 512" NOERROR qr,rd,ra 84 0.001741029s
	
	
	==> describe nodes <==
	Name:               addons-450053
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-450053
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=50c3a8a3c03e8a84b6c978a884d21c3de8c6d4f1
	                    minikube.k8s.io/name=addons-450053
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_23T08_20_46_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-450053
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-450053"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 23 Nov 2025 08:20:43 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-450053
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 23 Nov 2025 08:24:40 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 23 Nov 2025 08:24:09 +0000   Sun, 23 Nov 2025 08:20:41 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 23 Nov 2025 08:24:09 +0000   Sun, 23 Nov 2025 08:20:41 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 23 Nov 2025 08:24:09 +0000   Sun, 23 Nov 2025 08:20:41 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 23 Nov 2025 08:24:09 +0000   Sun, 23 Nov 2025 08:21:03 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-450053
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 9629f1d5bc1ed524a56ce23c69214c09
	  System UUID:                9cca0d89-df6b-42c0-91ac-94fbf27bab0b
	  Boot ID:                    49386d37-de97-442f-8364-9b77578bcf47
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (29 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m47s
	  default                     cloud-spanner-emulator-5bdddb765-4vxx9      0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m49s
	  default                     hello-world-app-5d498dc89-bhsnh             0 (0%)        0 (0%)      0 (0%)           0 (0%)         1s
	  default                     nginx                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m26s
	  gadget                      gadget-mblm5                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m49s
	  gcp-auth                    gcp-auth-78565c9fb4-mxx49                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m42s
	  ingress-nginx               ingress-nginx-controller-6c8bf45fb-k5xk4    100m (1%)     0 (0%)      90Mi (0%)        0 (0%)         3m49s
	  kube-system                 amd-gpu-device-plugin-625vc                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m38s
	  kube-system                 coredns-66bc5c9577-n2ksh                    100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     3m50s
	  kube-system                 csi-hostpath-attacher-0                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m49s
	  kube-system                 csi-hostpath-resizer-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m49s
	  kube-system                 csi-hostpathplugin-kgwc9                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m38s
	  kube-system                 etcd-addons-450053                          100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         3m56s
	  kube-system                 kindnet-w25rx                               100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      3m51s
	  kube-system                 kube-apiserver-addons-450053                250m (3%)     0 (0%)      0 (0%)           0 (0%)         3m56s
	  kube-system                 kube-controller-manager-addons-450053       200m (2%)     0 (0%)      0 (0%)           0 (0%)         3m56s
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m49s
	  kube-system                 kube-proxy-mvm7j                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m51s
	  kube-system                 kube-scheduler-addons-450053                100m (1%)     0 (0%)      0 (0%)           0 (0%)         3m56s
	  kube-system                 metrics-server-85b7d694d7-74pfv             100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         3m49s
	  kube-system                 nvidia-device-plugin-daemonset-hpnrm        0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m38s
	  kube-system                 registry-6b586f9694-48d75                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m49s
	  kube-system                 registry-creds-764b6fb674-gvfgs             0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m50s
	  kube-system                 registry-proxy-l5z45                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m38s
	  kube-system                 snapshot-controller-7d9fbc56b8-c52wh        0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m48s
	  kube-system                 snapshot-controller-7d9fbc56b8-jgxr4        0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m48s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m49s
	  local-path-storage          local-path-provisioner-648f6765c9-xkbws     0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m49s
	  yakd-dashboard              yakd-dashboard-5ff678cb9-289rv              0 (0%)        0 (0%)      128Mi (0%)       256Mi (0%)     3m49s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (13%)  100m (1%)
	  memory             638Mi (1%)   476Mi (1%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 3m48s  kube-proxy       
	  Normal  Starting                 3m56s  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  3m56s  kubelet          Node addons-450053 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m56s  kubelet          Node addons-450053 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m56s  kubelet          Node addons-450053 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           3m51s  node-controller  Node addons-450053 event: Registered Node addons-450053 in Controller
	  Normal  NodeReady                3m38s  kubelet          Node addons-450053 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 06 82 4b 59 78 74 08 06
	[Nov23 08:13] IPv4: martian source 10.244.0.1 from 10.244.0.51, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 1e 73 2a 74 8f 84 08 06
	[Nov23 08:22] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 52 89 3b 40 d7 04 72 61 ff 75 a6 33 08 00
	[  +1.017594] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000026] ll header: 00000000: 52 89 3b 40 d7 04 72 61 ff 75 a6 33 08 00
	[  +1.023854] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000020] ll header: 00000000: 52 89 3b 40 d7 04 72 61 ff 75 a6 33 08 00
	[  +1.023902] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 52 89 3b 40 d7 04 72 61 ff 75 a6 33 08 00
	[  +1.024926] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 52 89 3b 40 d7 04 72 61 ff 75 a6 33 08 00
	[  +1.022928] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000027] ll header: 00000000: 52 89 3b 40 d7 04 72 61 ff 75 a6 33 08 00
	[  +2.047819] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 52 89 3b 40 d7 04 72 61 ff 75 a6 33 08 00
	[  +4.031665] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 52 89 3b 40 d7 04 72 61 ff 75 a6 33 08 00
	[  +8.255342] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 52 89 3b 40 d7 04 72 61 ff 75 a6 33 08 00
	[Nov23 08:23] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 52 89 3b 40 d7 04 72 61 ff 75 a6 33 08 00
	[ +32.253523] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000023] ll header: 00000000: 52 89 3b 40 d7 04 72 61 ff 75 a6 33 08 00
	
	
	==> etcd [58c0dd74075cff851380d2c9c1fbd280646b92ecf56beb6cd7c81444769b9354] <==
	{"level":"warn","ts":"2025-11-23T08:20:42.561653Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51262","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:20:42.568446Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51278","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:20:42.574822Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51292","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:20:42.582319Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51312","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:20:42.589761Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51334","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:20:42.598120Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51338","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:20:42.603907Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51366","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:20:42.609784Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51388","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:20:42.616911Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51404","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:20:42.623677Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51430","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:20:42.629844Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51454","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:20:42.635962Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51472","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:20:42.641963Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51496","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:20:42.656735Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51518","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:20:42.663735Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51546","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:20:42.677158Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51578","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:20:42.680525Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51584","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:20:42.686257Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51600","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:20:42.691982Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51614","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:20:53.686006Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39042","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:20:53.691571Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39064","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:21:18.218001Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58436","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:21:18.224278Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58446","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:21:18.237440Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58474","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-23T08:21:58.015088Z","caller":"traceutil/trace.go:172","msg":"trace[1394206225] transaction","detail":"{read_only:false; response_revision:1230; number_of_response:1; }","duration":"144.604929ms","start":"2025-11-23T08:21:57.870466Z","end":"2025-11-23T08:21:58.015071Z","steps":["trace[1394206225] 'process raft request'  (duration: 112.722637ms)","trace[1394206225] 'compare'  (duration: 31.741613ms)"],"step_count":2}
	
	
	==> gcp-auth [b833e2e0c14c495f4822401a4e678bc4bf1b3c659b58dd7ec5a2f7fb8f13b8e0] <==
	2025/11/23 08:21:45 GCP Auth Webhook started!
	2025/11/23 08:21:54 Ready to marshal response ...
	2025/11/23 08:21:54 Ready to write response ...
	2025/11/23 08:21:54 Ready to marshal response ...
	2025/11/23 08:21:54 Ready to write response ...
	2025/11/23 08:21:54 Ready to marshal response ...
	2025/11/23 08:21:54 Ready to write response ...
	2025/11/23 08:22:05 Ready to marshal response ...
	2025/11/23 08:22:05 Ready to write response ...
	2025/11/23 08:22:05 Ready to marshal response ...
	2025/11/23 08:22:05 Ready to write response ...
	2025/11/23 08:22:14 Ready to marshal response ...
	2025/11/23 08:22:14 Ready to write response ...
	2025/11/23 08:22:15 Ready to marshal response ...
	2025/11/23 08:22:15 Ready to write response ...
	2025/11/23 08:22:15 Ready to marshal response ...
	2025/11/23 08:22:15 Ready to write response ...
	2025/11/23 08:22:27 Ready to marshal response ...
	2025/11/23 08:22:27 Ready to write response ...
	2025/11/23 08:22:52 Ready to marshal response ...
	2025/11/23 08:22:52 Ready to write response ...
	2025/11/23 08:24:40 Ready to marshal response ...
	2025/11/23 08:24:40 Ready to write response ...
	
	
	==> kernel <==
	 08:24:41 up  1:07,  0 user,  load average: 0.55, 1.07, 0.88
	Linux addons-450053 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [204df826a5f7fe5bc267f72965e69d0d4468a7d03c4f27b3c4dbc85558bcb790] <==
	I1123 08:22:32.893110       1 main.go:301] handling current node
	I1123 08:22:42.893890       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1123 08:22:42.893925       1 main.go:301] handling current node
	I1123 08:22:52.892586       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1123 08:22:52.892615       1 main.go:301] handling current node
	I1123 08:23:02.893126       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1123 08:23:02.893163       1 main.go:301] handling current node
	I1123 08:23:12.894909       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1123 08:23:12.894946       1 main.go:301] handling current node
	I1123 08:23:22.892585       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1123 08:23:22.892624       1 main.go:301] handling current node
	I1123 08:23:32.895630       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1123 08:23:32.895659       1 main.go:301] handling current node
	I1123 08:23:42.894988       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1123 08:23:42.895036       1 main.go:301] handling current node
	I1123 08:23:52.892620       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1123 08:23:52.892684       1 main.go:301] handling current node
	I1123 08:24:02.892657       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1123 08:24:02.892689       1 main.go:301] handling current node
	I1123 08:24:12.894789       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1123 08:24:12.894820       1 main.go:301] handling current node
	I1123 08:24:22.894988       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1123 08:24:22.895034       1 main.go:301] handling current node
	I1123 08:24:32.900239       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1123 08:24:32.900270       1 main.go:301] handling current node
	
	
	==> kube-apiserver [2b03a5a989737025a7389ea9ffc8f4440c697509dd7c3423436b93deabfc0635] <==
	E1123 08:21:03.160701       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.96.197.244:443: connect: connection refused" logger="UnhandledError"
	W1123 08:21:03.181288       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.96.197.244:443: connect: connection refused
	E1123 08:21:03.181329       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.96.197.244:443: connect: connection refused" logger="UnhandledError"
	W1123 08:21:03.182273       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.96.197.244:443: connect: connection refused
	E1123 08:21:03.182310       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.96.197.244:443: connect: connection refused" logger="UnhandledError"
	W1123 08:21:06.667299       1 handler_proxy.go:99] no RequestInfo found in the context
	E1123 08:21:06.667370       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.98.189.118:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.98.189.118:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.98.189.118:443: connect: connection refused" logger="UnhandledError"
	E1123 08:21:06.667483       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1123 08:21:06.668111       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.98.189.118:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.98.189.118:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.98.189.118:443: connect: connection refused" logger="UnhandledError"
	E1123 08:21:06.673539       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.98.189.118:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.98.189.118:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.98.189.118:443: connect: connection refused" logger="UnhandledError"
	E1123 08:21:06.694571       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.98.189.118:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.98.189.118:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.98.189.118:443: connect: connection refused" logger="UnhandledError"
	I1123 08:21:06.771359       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W1123 08:21:18.217928       1 logging.go:55] [core] [Channel #267 SubChannel #268]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1123 08:21:18.224263       1 logging.go:55] [core] [Channel #271 SubChannel #272]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1123 08:21:18.237412       1 logging.go:55] [core] [Channel #275 SubChannel #276]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1123 08:21:18.244147       1 logging.go:55] [core] [Channel #279 SubChannel #280]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	E1123 08:22:04.612053       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:45424: use of closed network connection
	E1123 08:22:04.758272       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:45442: use of closed network connection
	I1123 08:22:15.409079       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
	I1123 08:22:15.591067       1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.99.11.18"}
	I1123 08:22:38.271401       1 controller.go:667] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I1123 08:24:40.185776       1 alloc.go:328] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.107.253.221"}
	
	
	==> kube-controller-manager [3d0e901e59417cd60672ef724e6e4202b2c13b0499e5b198455979ade2929d18] <==
	I1123 08:20:50.096083       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1123 08:20:50.096092       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1123 08:20:50.096048       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1123 08:20:50.097385       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1123 08:20:50.097426       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1123 08:20:50.097470       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1123 08:20:50.097488       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1123 08:20:50.097522       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1123 08:20:50.097537       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1123 08:20:50.097560       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1123 08:20:50.097915       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1123 08:20:50.098198       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1123 08:20:50.099835       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1123 08:20:50.100841       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1123 08:20:50.100855       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1123 08:20:50.100929       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1123 08:20:50.113199       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1123 08:20:50.113209       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E1123 08:20:52.217373       1 replica_set.go:587] "Unhandled Error" err="sync \"kube-system/metrics-server-85b7d694d7\" failed with pods \"metrics-server-85b7d694d7-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found" logger="UnhandledError"
	I1123 08:21:05.035944       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I1123 08:21:20.105383       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="volumesnapshots.snapshot.storage.k8s.io"
	I1123 08:21:20.105427       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1123 08:21:20.119739       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1123 08:21:20.206233       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1123 08:21:20.220687       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [f5b7d2b9fc7fd00d96afc93889d6409f70924a5f3c846c8b45a49812283c03d7] <==
	I1123 08:20:52.498843       1 server_linux.go:53] "Using iptables proxy"
	I1123 08:20:52.583496       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1123 08:20:52.683865       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1123 08:20:52.683893       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1123 08:20:52.684018       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1123 08:20:52.709134       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1123 08:20:52.709199       1 server_linux.go:132] "Using iptables Proxier"
	I1123 08:20:52.716278       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1123 08:20:52.717462       1 server.go:527] "Version info" version="v1.34.1"
	I1123 08:20:52.717548       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1123 08:20:52.719873       1 config.go:200] "Starting service config controller"
	I1123 08:20:52.719899       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1123 08:20:52.719939       1 config.go:403] "Starting serviceCIDR config controller"
	I1123 08:20:52.719945       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1123 08:20:52.719982       1 config.go:106] "Starting endpoint slice config controller"
	I1123 08:20:52.719988       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1123 08:20:52.720803       1 config.go:309] "Starting node config controller"
	I1123 08:20:52.720850       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1123 08:20:52.720907       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1123 08:20:52.821009       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1123 08:20:52.821082       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1123 08:20:52.821136       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [5ce09f86a113c208d0fb8a89714d1855c858b529d081cf18c7e93d6084542e29] <==
	E1123 08:20:43.125208       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1123 08:20:43.125317       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1123 08:20:43.125351       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1123 08:20:43.125406       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1123 08:20:43.125506       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1123 08:20:43.126232       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1123 08:20:43.126238       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1123 08:20:43.126304       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1123 08:20:43.126300       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1123 08:20:43.126375       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1123 08:20:43.126393       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1123 08:20:43.126423       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1123 08:20:43.126430       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1123 08:20:43.126476       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1123 08:20:43.946157       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1123 08:20:43.985432       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1123 08:20:44.029884       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1123 08:20:44.030774       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1123 08:20:44.179823       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1123 08:20:44.211952       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1123 08:20:44.223951       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1123 08:20:44.262531       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1123 08:20:44.285648       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1123 08:20:44.326219       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	I1123 08:20:46.322460       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 23 08:22:54 addons-450053 kubelet[1287]: I1123 08:22:54.125720    1287 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/task-pv-pod-restore" podStartSLOduration=0.998915208 podStartE2EDuration="2.125699739s" podCreationTimestamp="2025-11-23 08:22:52 +0000 UTC" firstStartedPulling="2025-11-23 08:22:52.832959398 +0000 UTC m=+127.354620696" lastFinishedPulling="2025-11-23 08:22:53.959743921 +0000 UTC m=+128.481405227" observedRunningTime="2025-11-23 08:22:54.124734447 +0000 UTC m=+128.646395768" watchObservedRunningTime="2025-11-23 08:22:54.125699739 +0000 UTC m=+128.647361058"
	Nov 23 08:23:00 addons-450053 kubelet[1287]: I1123 08:23:00.557062    1287 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-hpnrm" secret="" err="secret \"gcp-auth\" not found"
	Nov 23 08:23:00 addons-450053 kubelet[1287]: I1123 08:23:00.782721    1287 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"task-pv-storage\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^9a42afbb-c845-11f0-ac1b-f658b5f1dc48\") pod \"2581e5ad-5af2-4d6f-8350-916672831706\" (UID: \"2581e5ad-5af2-4d6f-8350-916672831706\") "
	Nov 23 08:23:00 addons-450053 kubelet[1287]: I1123 08:23:00.782768    1287 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/2581e5ad-5af2-4d6f-8350-916672831706-gcp-creds\") pod \"2581e5ad-5af2-4d6f-8350-916672831706\" (UID: \"2581e5ad-5af2-4d6f-8350-916672831706\") "
	Nov 23 08:23:00 addons-450053 kubelet[1287]: I1123 08:23:00.782799    1287 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h9l7k\" (UniqueName: \"kubernetes.io/projected/2581e5ad-5af2-4d6f-8350-916672831706-kube-api-access-h9l7k\") pod \"2581e5ad-5af2-4d6f-8350-916672831706\" (UID: \"2581e5ad-5af2-4d6f-8350-916672831706\") "
	Nov 23 08:23:00 addons-450053 kubelet[1287]: I1123 08:23:00.782866    1287 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2581e5ad-5af2-4d6f-8350-916672831706-gcp-creds" (OuterVolumeSpecName: "gcp-creds") pod "2581e5ad-5af2-4d6f-8350-916672831706" (UID: "2581e5ad-5af2-4d6f-8350-916672831706"). InnerVolumeSpecName "gcp-creds". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
	Nov 23 08:23:00 addons-450053 kubelet[1287]: I1123 08:23:00.782999    1287 reconciler_common.go:299] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/2581e5ad-5af2-4d6f-8350-916672831706-gcp-creds\") on node \"addons-450053\" DevicePath \"\""
	Nov 23 08:23:00 addons-450053 kubelet[1287]: I1123 08:23:00.785245    1287 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2581e5ad-5af2-4d6f-8350-916672831706-kube-api-access-h9l7k" (OuterVolumeSpecName: "kube-api-access-h9l7k") pod "2581e5ad-5af2-4d6f-8350-916672831706" (UID: "2581e5ad-5af2-4d6f-8350-916672831706"). InnerVolumeSpecName "kube-api-access-h9l7k". PluginName "kubernetes.io/projected", VolumeGIDValue ""
	Nov 23 08:23:00 addons-450053 kubelet[1287]: I1123 08:23:00.785847    1287 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/hostpath.csi.k8s.io^9a42afbb-c845-11f0-ac1b-f658b5f1dc48" (OuterVolumeSpecName: "task-pv-storage") pod "2581e5ad-5af2-4d6f-8350-916672831706" (UID: "2581e5ad-5af2-4d6f-8350-916672831706"). InnerVolumeSpecName "pvc-ae052ae5-76df-43cb-9672-9fe0e53220f3". PluginName "kubernetes.io/csi", VolumeGIDValue ""
	Nov 23 08:23:00 addons-450053 kubelet[1287]: I1123 08:23:00.883606    1287 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-h9l7k\" (UniqueName: \"kubernetes.io/projected/2581e5ad-5af2-4d6f-8350-916672831706-kube-api-access-h9l7k\") on node \"addons-450053\" DevicePath \"\""
	Nov 23 08:23:00 addons-450053 kubelet[1287]: I1123 08:23:00.883681    1287 reconciler_common.go:292] "operationExecutor.UnmountDevice started for volume \"pvc-ae052ae5-76df-43cb-9672-9fe0e53220f3\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^9a42afbb-c845-11f0-ac1b-f658b5f1dc48\") on node \"addons-450053\" "
	Nov 23 08:23:00 addons-450053 kubelet[1287]: I1123 08:23:00.887831    1287 operation_generator.go:895] UnmountDevice succeeded for volume "pvc-ae052ae5-76df-43cb-9672-9fe0e53220f3" (UniqueName: "kubernetes.io/csi/hostpath.csi.k8s.io^9a42afbb-c845-11f0-ac1b-f658b5f1dc48") on node "addons-450053"
	Nov 23 08:23:00 addons-450053 kubelet[1287]: I1123 08:23:00.985053    1287 reconciler_common.go:299] "Volume detached for volume \"pvc-ae052ae5-76df-43cb-9672-9fe0e53220f3\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^9a42afbb-c845-11f0-ac1b-f658b5f1dc48\") on node \"addons-450053\" DevicePath \"\""
	Nov 23 08:23:01 addons-450053 kubelet[1287]: I1123 08:23:01.145412    1287 scope.go:117] "RemoveContainer" containerID="22ee677972d46c75a388213d3e7f501ba68c6c91574157dc98a4b389167c1552"
	Nov 23 08:23:01 addons-450053 kubelet[1287]: I1123 08:23:01.155368    1287 scope.go:117] "RemoveContainer" containerID="22ee677972d46c75a388213d3e7f501ba68c6c91574157dc98a4b389167c1552"
	Nov 23 08:23:01 addons-450053 kubelet[1287]: E1123 08:23:01.155784    1287 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"22ee677972d46c75a388213d3e7f501ba68c6c91574157dc98a4b389167c1552\": container with ID starting with 22ee677972d46c75a388213d3e7f501ba68c6c91574157dc98a4b389167c1552 not found: ID does not exist" containerID="22ee677972d46c75a388213d3e7f501ba68c6c91574157dc98a4b389167c1552"
	Nov 23 08:23:01 addons-450053 kubelet[1287]: I1123 08:23:01.155838    1287 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"22ee677972d46c75a388213d3e7f501ba68c6c91574157dc98a4b389167c1552"} err="failed to get container status \"22ee677972d46c75a388213d3e7f501ba68c6c91574157dc98a4b389167c1552\": rpc error: code = NotFound desc = could not find container \"22ee677972d46c75a388213d3e7f501ba68c6c91574157dc98a4b389167c1552\": container with ID starting with 22ee677972d46c75a388213d3e7f501ba68c6c91574157dc98a4b389167c1552 not found: ID does not exist"
	Nov 23 08:23:01 addons-450053 kubelet[1287]: I1123 08:23:01.560385    1287 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2581e5ad-5af2-4d6f-8350-916672831706" path="/var/lib/kubelet/pods/2581e5ad-5af2-4d6f-8350-916672831706/volumes"
	Nov 23 08:23:06 addons-450053 kubelet[1287]: E1123 08:23:06.182659    1287 pod_workers.go:1324] "Error syncing pod, skipping" err="unmounted volumes=[gcr-creds], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="kube-system/registry-creds-764b6fb674-gvfgs" podUID="b765d6ad-2418-44c7-9da3-fb58dc143860"
	Nov 23 08:23:22 addons-450053 kubelet[1287]: I1123 08:23:22.242205    1287 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/registry-creds-764b6fb674-gvfgs" podStartSLOduration=149.717630638 podStartE2EDuration="2m31.242186927s" podCreationTimestamp="2025-11-23 08:20:51 +0000 UTC" firstStartedPulling="2025-11-23 08:23:20.581189771 +0000 UTC m=+155.102851070" lastFinishedPulling="2025-11-23 08:23:22.105746061 +0000 UTC m=+156.627407359" observedRunningTime="2025-11-23 08:23:22.241956208 +0000 UTC m=+156.763617528" watchObservedRunningTime="2025-11-23 08:23:22.242186927 +0000 UTC m=+156.763848246"
	Nov 23 08:23:52 addons-450053 kubelet[1287]: I1123 08:23:52.557412    1287 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-l5z45" secret="" err="secret \"gcp-auth\" not found"
	Nov 23 08:23:58 addons-450053 kubelet[1287]: I1123 08:23:58.557494    1287 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-625vc" secret="" err="secret \"gcp-auth\" not found"
	Nov 23 08:24:21 addons-450053 kubelet[1287]: I1123 08:24:21.556802    1287 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-hpnrm" secret="" err="secret \"gcp-auth\" not found"
	Nov 23 08:24:40 addons-450053 kubelet[1287]: I1123 08:24:40.220692    1287 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wjrsr\" (UniqueName: \"kubernetes.io/projected/e6dd7e5c-1103-4275-9601-8b2eed7734ba-kube-api-access-wjrsr\") pod \"hello-world-app-5d498dc89-bhsnh\" (UID: \"e6dd7e5c-1103-4275-9601-8b2eed7734ba\") " pod="default/hello-world-app-5d498dc89-bhsnh"
	Nov 23 08:24:40 addons-450053 kubelet[1287]: I1123 08:24:40.220786    1287 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/e6dd7e5c-1103-4275-9601-8b2eed7734ba-gcp-creds\") pod \"hello-world-app-5d498dc89-bhsnh\" (UID: \"e6dd7e5c-1103-4275-9601-8b2eed7734ba\") " pod="default/hello-world-app-5d498dc89-bhsnh"
	
	
	==> storage-provisioner [4ea39bfdb1b8efcabd00b3a8f0c5de2e2517bbf238a941a88f165c0fd97b9473] <==
	W1123 08:24:16.410554       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:24:18.413908       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:24:18.418711       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:24:20.421803       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:24:20.425748       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:24:22.428809       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:24:22.432845       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:24:24.436285       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:24:24.441265       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:24:26.444618       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:24:26.448702       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:24:28.451546       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:24:28.455247       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:24:30.458441       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:24:30.463146       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:24:32.466191       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:24:32.470432       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:24:34.473814       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:24:34.477472       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:24:36.480692       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:24:36.485991       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:24:38.489105       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:24:38.493327       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:24:40.496883       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:24:40.501296       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
-- /stdout --
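The kube-scheduler "Failed to watch ... is forbidden" errors near the top of this log are startup-ordering noise: the scheduler begins its list/watch loops before its RBAC bindings are visible, and the "Caches are synced" line at 08:20:46 shows it recovered on its own. The storage-provisioner warnings are likewise benign; it still leader-elects against v1 Endpoints, which the warnings report as deprecated in favor of discovery.k8s.io/v1 EndpointSlice. Both can be spot-checked after the run with commands along these lines (illustrative additions, not part of the test):

	# confirm the scheduler's RBAC grants the verbs that were briefly denied
	kubectl --context addons-450053 auth can-i list storageclasses.storage.k8s.io --as=system:kube-scheduler
	kubectl --context addons-450053 auth can-i list volumeattachments.storage.k8s.io --as=system:kube-scheduler
	# inspect the EndpointSlice objects the deprecation warning points at
	kubectl --context addons-450053 get endpointslices.discovery.k8s.io -A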
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-450053 -n addons-450053
helpers_test.go:269: (dbg) Run:  kubectl --context addons-450053 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: hello-world-app-5d498dc89-bhsnh ingress-nginx-admission-create-ll2hd ingress-nginx-admission-patch-5fm9n
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-450053 describe pod hello-world-app-5d498dc89-bhsnh ingress-nginx-admission-create-ll2hd ingress-nginx-admission-patch-5fm9n
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-450053 describe pod hello-world-app-5d498dc89-bhsnh ingress-nginx-admission-create-ll2hd ingress-nginx-admission-patch-5fm9n: exit status 1 (71.549807ms)
-- stdout --
	Name:             hello-world-app-5d498dc89-bhsnh
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-450053/192.168.49.2
	Start Time:       Sun, 23 Nov 2025 08:24:40 +0000
	Labels:           app=hello-world-app
	                  pod-template-hash=5d498dc89
	Annotations:      <none>
	Status:           Running
	IP:               10.244.0.32
	IPs:
	  IP:           10.244.0.32
	Controlled By:  ReplicaSet/hello-world-app-5d498dc89
	Containers:
	  hello-world-app:
	    Container ID:   cri-o://b84677ef1d1c53517f220454fa3c6e5fe364ccc761e32cfa732ccfb41bee1aa0
	    Image:          docker.io/kicbase/echo-server:1.0
	    Image ID:       docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6
	    Port:           8080/TCP
	    Host Port:      0/TCP
	    State:          Running
	      Started:      Sun, 23 Nov 2025 08:24:41 +0000
	    Ready:          True
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-wjrsr (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       True 
	  ContainersReady             True 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-wjrsr:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  2s    default-scheduler  Successfully assigned default/hello-world-app-5d498dc89-bhsnh to addons-450053
	  Normal  Pulling    2s    kubelet            Pulling image "docker.io/kicbase/echo-server:1.0"
	  Normal  Pulled     1s    kubelet            Successfully pulled image "docker.io/kicbase/echo-server:1.0" in 1.241s (1.241s including waiting). Image size: 4944818 bytes.
	  Normal  Created    1s    kubelet            Created container: hello-world-app
	  Normal  Started    1s    kubelet            Started container hello-world-app
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-ll2hd" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-5fm9n" not found
** /stderr **
helpers_test.go:287: kubectl --context addons-450053 describe pod hello-world-app-5d498dc89-bhsnh ingress-nginx-admission-create-ll2hd ingress-nginx-admission-patch-5fm9n: exit status 1
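The exit status 1 from the describe step is expected rather than a secondary failure: kubectl describe aborts with NotFound as soon as any named pod is missing, and the two ingress-nginx admission pods are created by short-lived Jobs that are cleaned up once the webhook is configured (they also normally live in the addon's own namespace, assumed here to be ingress-nginx, not default). A sketch of a probe that tolerates the already-deleted pods:

	# --ignore-not-found suppresses the NotFound error for pods that are already gone
	kubectl --context addons-450053 get pod hello-world-app-5d498dc89-bhsnh --ignore-not-found
	kubectl --context addons-450053 -n ingress-nginx get pod --ignore-not-found \
		ingress-nginx-admission-create-ll2hd ingress-nginx-admission-patch-5fm9n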
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-450053 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-450053 addons disable ingress-dns --alsologtostderr -v=1: exit status 11 (251.064847ms)
-- stdout --
	
	
-- /stdout --
** stderr ** 
	I1123 08:24:42.647703  123001 out.go:360] Setting OutFile to fd 1 ...
	I1123 08:24:42.647991  123001 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:24:42.648004  123001 out.go:374] Setting ErrFile to fd 2...
	I1123 08:24:42.648009  123001 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:24:42.648203  123001 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21969-103686/.minikube/bin
	I1123 08:24:42.648458  123001 mustload.go:66] Loading cluster: addons-450053
	I1123 08:24:42.648753  123001 config.go:182] Loaded profile config "addons-450053": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 08:24:42.648767  123001 addons.go:622] checking whether the cluster is paused
	I1123 08:24:42.648842  123001 config.go:182] Loaded profile config "addons-450053": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 08:24:42.648859  123001 host.go:66] Checking if "addons-450053" exists ...
	I1123 08:24:42.649323  123001 cli_runner.go:164] Run: docker container inspect addons-450053 --format={{.State.Status}}
	I1123 08:24:42.667184  123001 ssh_runner.go:195] Run: systemctl --version
	I1123 08:24:42.667232  123001 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-450053
	I1123 08:24:42.686448  123001 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21969-103686/.minikube/machines/addons-450053/id_rsa Username:docker}
	I1123 08:24:42.787677  123001 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1123 08:24:42.787754  123001 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1123 08:24:42.816444  123001 cri.go:89] found id: "b23a63ebc59aa2e2502c33b1df9342c8f120d91f07321751341008ea2afcefe7"
	I1123 08:24:42.816464  123001 cri.go:89] found id: "738d8d379f2513ebbed6c9882209756963a949bde3ed19ade5de8580001c43b6"
	I1123 08:24:42.816468  123001 cri.go:89] found id: "d984a1356e5ecf35be65e8fc6e7992bb042d8927a704c9b1e8331c05254332d5"
	I1123 08:24:42.816471  123001 cri.go:89] found id: "f1bd36bf8d3aa419e06a2d8728e06eef3a4eb3bac9a5f4c3b24fff0f491bdd61"
	I1123 08:24:42.816474  123001 cri.go:89] found id: "e39671b6291757e254f89dc6033c7d24376b7c7120673820ff9f2cd071649ede"
	I1123 08:24:42.816478  123001 cri.go:89] found id: "524005afa9256011512767926b02159bfbb545a2d097df64aeda6918b32cfbaa"
	I1123 08:24:42.816481  123001 cri.go:89] found id: "9989944eaa26fdbd8c011baeec7cf3efbfbbe246f5276b6ceecbd64d61294399"
	I1123 08:24:42.816483  123001 cri.go:89] found id: "e3688d5b85c227523b5a3ce94991d4ee820fdc1ae296225f370587505ff591b6"
	I1123 08:24:42.816486  123001 cri.go:89] found id: "8ecc013e239af1858173ffe38500069f30090d7c4a8d2e55e0cf7931a593fbbe"
	I1123 08:24:42.816491  123001 cri.go:89] found id: "1dfc56fc8d94b1225a098a523c9650f6663217b21237541dc906578e3effc03d"
	I1123 08:24:42.816494  123001 cri.go:89] found id: "227f1cba9bc38078f86a2ee004edc57f34ac09f7aae18e70a35257d97524a389"
	I1123 08:24:42.816497  123001 cri.go:89] found id: "878966c2c1dd7601f149f13eb451daa7034eebd08cef35eebb83a577b882ce48"
	I1123 08:24:42.816500  123001 cri.go:89] found id: "f9cd2adc0709d244a2c7bc3357291110cd3b690d9689c58d1d015c5371f7f2ca"
	I1123 08:24:42.816503  123001 cri.go:89] found id: "a6ff371d12340c0a9617d886be8620819d349d024e915a5c18777920e9522800"
	I1123 08:24:42.816511  123001 cri.go:89] found id: "8364e195c165b56eaa9cee7e25199a566d7f232fea45a9c0da829ce74e7a169e"
	I1123 08:24:42.816522  123001 cri.go:89] found id: "bca140d99c87f34e3a5c81b3e3f53364fd36a08c860a55709db43ad1f00c7bd8"
	I1123 08:24:42.816525  123001 cri.go:89] found id: "0e62c249e71fecd3ff09a415c2a850ba5eb56735172347f36a18693f8631498e"
	I1123 08:24:42.816538  123001 cri.go:89] found id: "4ea39bfdb1b8efcabd00b3a8f0c5de2e2517bbf238a941a88f165c0fd97b9473"
	I1123 08:24:42.816542  123001 cri.go:89] found id: "fc8e0ddc56a4bb558f8c5f776af3a634abca3c1b4e966b269f9b6b6d44daab9e"
	I1123 08:24:42.816545  123001 cri.go:89] found id: "f5b7d2b9fc7fd00d96afc93889d6409f70924a5f3c846c8b45a49812283c03d7"
	I1123 08:24:42.816547  123001 cri.go:89] found id: "204df826a5f7fe5bc267f72965e69d0d4468a7d03c4f27b3c4dbc85558bcb790"
	I1123 08:24:42.816558  123001 cri.go:89] found id: "2b03a5a989737025a7389ea9ffc8f4440c697509dd7c3423436b93deabfc0635"
	I1123 08:24:42.816563  123001 cri.go:89] found id: "5ce09f86a113c208d0fb8a89714d1855c858b529d081cf18c7e93d6084542e29"
	I1123 08:24:42.816566  123001 cri.go:89] found id: "3d0e901e59417cd60672ef724e6e4202b2c13b0499e5b198455979ade2929d18"
	I1123 08:24:42.816569  123001 cri.go:89] found id: "58c0dd74075cff851380d2c9c1fbd280646b92ecf56beb6cd7c81444769b9354"
	I1123 08:24:42.816572  123001 cri.go:89] found id: ""
	I1123 08:24:42.816610  123001 ssh_runner.go:195] Run: sudo runc list -f json
	I1123 08:24:42.830694  123001 out.go:203] 
	W1123 08:24:42.832000  123001 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T08:24:42Z" level=error msg="open /run/runc: no such file or directory"
	
	W1123 08:24:42.832019  123001 out.go:285] * 
	W1123 08:24:42.835205  123001 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_4116e8848b7c0e6a40fa9061a5ca6da2e0eb6ead_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1123 08:24:42.836283  123001 out.go:203] 
** /stderr **
addons_test.go:1055: failed to disable ingress-dns addon: args "out/minikube-linux-amd64 -p addons-450053 addons disable ingress-dns --alsologtostderr -v=1": exit status 11
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-450053 addons disable ingress --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-450053 addons disable ingress --alsologtostderr -v=1: exit status 11 (242.847094ms)
-- stdout --
	
	
-- /stdout --
** stderr ** 
	I1123 08:24:42.896038  123064 out.go:360] Setting OutFile to fd 1 ...
	I1123 08:24:42.896198  123064 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:24:42.896211  123064 out.go:374] Setting ErrFile to fd 2...
	I1123 08:24:42.896215  123064 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:24:42.896415  123064 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21969-103686/.minikube/bin
	I1123 08:24:42.896686  123064 mustload.go:66] Loading cluster: addons-450053
	I1123 08:24:42.897114  123064 config.go:182] Loaded profile config "addons-450053": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 08:24:42.897139  123064 addons.go:622] checking whether the cluster is paused
	I1123 08:24:42.897246  123064 config.go:182] Loaded profile config "addons-450053": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 08:24:42.897267  123064 host.go:66] Checking if "addons-450053" exists ...
	I1123 08:24:42.897669  123064 cli_runner.go:164] Run: docker container inspect addons-450053 --format={{.State.Status}}
	I1123 08:24:42.915036  123064 ssh_runner.go:195] Run: systemctl --version
	I1123 08:24:42.915093  123064 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-450053
	I1123 08:24:42.932680  123064 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21969-103686/.minikube/machines/addons-450053/id_rsa Username:docker}
	I1123 08:24:43.031585  123064 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1123 08:24:43.031681  123064 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1123 08:24:43.059887  123064 cri.go:89] found id: "b23a63ebc59aa2e2502c33b1df9342c8f120d91f07321751341008ea2afcefe7"
	I1123 08:24:43.059906  123064 cri.go:89] found id: "738d8d379f2513ebbed6c9882209756963a949bde3ed19ade5de8580001c43b6"
	I1123 08:24:43.059910  123064 cri.go:89] found id: "d984a1356e5ecf35be65e8fc6e7992bb042d8927a704c9b1e8331c05254332d5"
	I1123 08:24:43.059916  123064 cri.go:89] found id: "f1bd36bf8d3aa419e06a2d8728e06eef3a4eb3bac9a5f4c3b24fff0f491bdd61"
	I1123 08:24:43.059920  123064 cri.go:89] found id: "e39671b6291757e254f89dc6033c7d24376b7c7120673820ff9f2cd071649ede"
	I1123 08:24:43.059926  123064 cri.go:89] found id: "524005afa9256011512767926b02159bfbb545a2d097df64aeda6918b32cfbaa"
	I1123 08:24:43.059932  123064 cri.go:89] found id: "9989944eaa26fdbd8c011baeec7cf3efbfbbe246f5276b6ceecbd64d61294399"
	I1123 08:24:43.059936  123064 cri.go:89] found id: "e3688d5b85c227523b5a3ce94991d4ee820fdc1ae296225f370587505ff591b6"
	I1123 08:24:43.059941  123064 cri.go:89] found id: "8ecc013e239af1858173ffe38500069f30090d7c4a8d2e55e0cf7931a593fbbe"
	I1123 08:24:43.059953  123064 cri.go:89] found id: "1dfc56fc8d94b1225a098a523c9650f6663217b21237541dc906578e3effc03d"
	I1123 08:24:43.059957  123064 cri.go:89] found id: "227f1cba9bc38078f86a2ee004edc57f34ac09f7aae18e70a35257d97524a389"
	I1123 08:24:43.060068  123064 cri.go:89] found id: "878966c2c1dd7601f149f13eb451daa7034eebd08cef35eebb83a577b882ce48"
	I1123 08:24:43.060075  123064 cri.go:89] found id: "f9cd2adc0709d244a2c7bc3357291110cd3b690d9689c58d1d015c5371f7f2ca"
	I1123 08:24:43.060080  123064 cri.go:89] found id: "a6ff371d12340c0a9617d886be8620819d349d024e915a5c18777920e9522800"
	I1123 08:24:43.060086  123064 cri.go:89] found id: "8364e195c165b56eaa9cee7e25199a566d7f232fea45a9c0da829ce74e7a169e"
	I1123 08:24:43.060095  123064 cri.go:89] found id: "bca140d99c87f34e3a5c81b3e3f53364fd36a08c860a55709db43ad1f00c7bd8"
	I1123 08:24:43.060105  123064 cri.go:89] found id: "0e62c249e71fecd3ff09a415c2a850ba5eb56735172347f36a18693f8631498e"
	I1123 08:24:43.060113  123064 cri.go:89] found id: "4ea39bfdb1b8efcabd00b3a8f0c5de2e2517bbf238a941a88f165c0fd97b9473"
	I1123 08:24:43.060118  123064 cri.go:89] found id: "fc8e0ddc56a4bb558f8c5f776af3a634abca3c1b4e966b269f9b6b6d44daab9e"
	I1123 08:24:43.060122  123064 cri.go:89] found id: "f5b7d2b9fc7fd00d96afc93889d6409f70924a5f3c846c8b45a49812283c03d7"
	I1123 08:24:43.060133  123064 cri.go:89] found id: "204df826a5f7fe5bc267f72965e69d0d4468a7d03c4f27b3c4dbc85558bcb790"
	I1123 08:24:43.060141  123064 cri.go:89] found id: "2b03a5a989737025a7389ea9ffc8f4440c697509dd7c3423436b93deabfc0635"
	I1123 08:24:43.060146  123064 cri.go:89] found id: "5ce09f86a113c208d0fb8a89714d1855c858b529d081cf18c7e93d6084542e29"
	I1123 08:24:43.060153  123064 cri.go:89] found id: "3d0e901e59417cd60672ef724e6e4202b2c13b0499e5b198455979ade2929d18"
	I1123 08:24:43.060165  123064 cri.go:89] found id: "58c0dd74075cff851380d2c9c1fbd280646b92ecf56beb6cd7c81444769b9354"
	I1123 08:24:43.060173  123064 cri.go:89] found id: ""
	I1123 08:24:43.060223  123064 ssh_runner.go:195] Run: sudo runc list -f json
	I1123 08:24:43.073569  123064 out.go:203] 
	W1123 08:24:43.074851  123064 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T08:24:43Z" level=error msg="open /run/runc: no such file or directory"
	
	W1123 08:24:43.074877  123064 out.go:285] * 
	W1123 08:24:43.078039  123064 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_62553deefc570c97f2052ef703df7b8905a654d6_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1123 08:24:43.079382  123064 out.go:203] 
** /stderr **
addons_test.go:1055: failed to disable ingress addon: args "out/minikube-linux-amd64 -p addons-450053 addons disable ingress --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Ingress (147.92s)
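Every exit-status-11 "disable" failure in this report shares the root cause visible in the stderr above: before touching an addon, minikube probes whether the cluster is paused by running "sudo runc list -f json" inside the node, and on this CRI-O image that probe dies with "open /run/runc: no such file or directory". A plausible reading is that CRI-O here is configured with a runtime other than runc (for example crun), so runc's default state directory was never created; that is an inference, not something the log states. The probe can be reproduced by hand:

	# re-run minikube's paused-state probe inside the node; fails the same way
	minikube -p addons-450053 ssh -- sudo runc list -f json
	# check which runtime state directories actually exist under /run
	minikube -p addons-450053 ssh -- ls /run
	# dump CRI-O's view of its configured runtimes
	minikube -p addons-450053 ssh -- sudo crictl info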
TestAddons/parallel/InspektorGadget (6.26s)
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:352: "gadget-mblm5" [2ecf8ab5-a26d-48ac-a8d8-39b6d7e7eac2] Running
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.002869528s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-450053 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-450053 addons disable inspektor-gadget --alsologtostderr -v=1: exit status 11 (256.32367ms)
-- stdout --
	
	
-- /stdout --
** stderr ** 
	I1123 08:22:22.653049  119460 out.go:360] Setting OutFile to fd 1 ...
	I1123 08:22:22.653193  119460 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:22:22.653204  119460 out.go:374] Setting ErrFile to fd 2...
	I1123 08:22:22.653209  119460 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:22:22.653417  119460 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21969-103686/.minikube/bin
	I1123 08:22:22.653755  119460 mustload.go:66] Loading cluster: addons-450053
	I1123 08:22:22.654126  119460 config.go:182] Loaded profile config "addons-450053": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 08:22:22.654145  119460 addons.go:622] checking whether the cluster is paused
	I1123 08:22:22.654254  119460 config.go:182] Loaded profile config "addons-450053": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 08:22:22.654278  119460 host.go:66] Checking if "addons-450053" exists ...
	I1123 08:22:22.654686  119460 cli_runner.go:164] Run: docker container inspect addons-450053 --format={{.State.Status}}
	I1123 08:22:22.672039  119460 ssh_runner.go:195] Run: systemctl --version
	I1123 08:22:22.672096  119460 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-450053
	I1123 08:22:22.689845  119460 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21969-103686/.minikube/machines/addons-450053/id_rsa Username:docker}
	I1123 08:22:22.789937  119460 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1123 08:22:22.790059  119460 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1123 08:22:22.818763  119460 cri.go:89] found id: "738d8d379f2513ebbed6c9882209756963a949bde3ed19ade5de8580001c43b6"
	I1123 08:22:22.818782  119460 cri.go:89] found id: "d984a1356e5ecf35be65e8fc6e7992bb042d8927a704c9b1e8331c05254332d5"
	I1123 08:22:22.818787  119460 cri.go:89] found id: "f1bd36bf8d3aa419e06a2d8728e06eef3a4eb3bac9a5f4c3b24fff0f491bdd61"
	I1123 08:22:22.818790  119460 cri.go:89] found id: "e39671b6291757e254f89dc6033c7d24376b7c7120673820ff9f2cd071649ede"
	I1123 08:22:22.818793  119460 cri.go:89] found id: "524005afa9256011512767926b02159bfbb545a2d097df64aeda6918b32cfbaa"
	I1123 08:22:22.818797  119460 cri.go:89] found id: "9989944eaa26fdbd8c011baeec7cf3efbfbbe246f5276b6ceecbd64d61294399"
	I1123 08:22:22.818799  119460 cri.go:89] found id: "e3688d5b85c227523b5a3ce94991d4ee820fdc1ae296225f370587505ff591b6"
	I1123 08:22:22.818802  119460 cri.go:89] found id: "8ecc013e239af1858173ffe38500069f30090d7c4a8d2e55e0cf7931a593fbbe"
	I1123 08:22:22.818805  119460 cri.go:89] found id: "1dfc56fc8d94b1225a098a523c9650f6663217b21237541dc906578e3effc03d"
	I1123 08:22:22.818810  119460 cri.go:89] found id: "227f1cba9bc38078f86a2ee004edc57f34ac09f7aae18e70a35257d97524a389"
	I1123 08:22:22.818815  119460 cri.go:89] found id: "878966c2c1dd7601f149f13eb451daa7034eebd08cef35eebb83a577b882ce48"
	I1123 08:22:22.818819  119460 cri.go:89] found id: "f9cd2adc0709d244a2c7bc3357291110cd3b690d9689c58d1d015c5371f7f2ca"
	I1123 08:22:22.818823  119460 cri.go:89] found id: "a6ff371d12340c0a9617d886be8620819d349d024e915a5c18777920e9522800"
	I1123 08:22:22.818828  119460 cri.go:89] found id: "8364e195c165b56eaa9cee7e25199a566d7f232fea45a9c0da829ce74e7a169e"
	I1123 08:22:22.818833  119460 cri.go:89] found id: "bca140d99c87f34e3a5c81b3e3f53364fd36a08c860a55709db43ad1f00c7bd8"
	I1123 08:22:22.818844  119460 cri.go:89] found id: "0e62c249e71fecd3ff09a415c2a850ba5eb56735172347f36a18693f8631498e"
	I1123 08:22:22.818852  119460 cri.go:89] found id: "4ea39bfdb1b8efcabd00b3a8f0c5de2e2517bbf238a941a88f165c0fd97b9473"
	I1123 08:22:22.818858  119460 cri.go:89] found id: "fc8e0ddc56a4bb558f8c5f776af3a634abca3c1b4e966b269f9b6b6d44daab9e"
	I1123 08:22:22.818863  119460 cri.go:89] found id: "f5b7d2b9fc7fd00d96afc93889d6409f70924a5f3c846c8b45a49812283c03d7"
	I1123 08:22:22.818877  119460 cri.go:89] found id: "204df826a5f7fe5bc267f72965e69d0d4468a7d03c4f27b3c4dbc85558bcb790"
	I1123 08:22:22.818882  119460 cri.go:89] found id: "2b03a5a989737025a7389ea9ffc8f4440c697509dd7c3423436b93deabfc0635"
	I1123 08:22:22.818885  119460 cri.go:89] found id: "5ce09f86a113c208d0fb8a89714d1855c858b529d081cf18c7e93d6084542e29"
	I1123 08:22:22.818888  119460 cri.go:89] found id: "3d0e901e59417cd60672ef724e6e4202b2c13b0499e5b198455979ade2929d18"
	I1123 08:22:22.818890  119460 cri.go:89] found id: "58c0dd74075cff851380d2c9c1fbd280646b92ecf56beb6cd7c81444769b9354"
	I1123 08:22:22.818893  119460 cri.go:89] found id: ""
	I1123 08:22:22.818939  119460 ssh_runner.go:195] Run: sudo runc list -f json
	I1123 08:22:22.833680  119460 out.go:203] 
	W1123 08:22:22.834750  119460 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T08:22:22Z" level=error msg="open /run/runc: no such file or directory"
	
	W1123 08:22:22.834766  119460 out.go:285] * 
	W1123 08:22:22.837908  119460 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_07218961934993dd21acc63caaf1aa08873c018e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1123 08:22:22.839200  119460 out.go:203] 
** /stderr **
addons_test.go:1055: failed to disable inspektor-gadget addon: args "out/minikube-linux-amd64 -p addons-450053 addons disable inspektor-gadget --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/InspektorGadget (6.26s)
TestAddons/parallel/MetricsServer (6.31s)
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:455: metrics-server stabilized in 3.460488ms
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:352: "metrics-server-85b7d694d7-74pfv" [45d13fcb-95ca-476d-b5f6-96b8120fe8e4] Running
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.00345599s
addons_test.go:463: (dbg) Run:  kubectl --context addons-450053 top pods -n kube-system
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-450053 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-450053 addons disable metrics-server --alsologtostderr -v=1: exit status 11 (246.184946ms)
-- stdout --
	
	
-- /stdout --
** stderr ** 
	I1123 08:22:16.390365  118624 out.go:360] Setting OutFile to fd 1 ...
	I1123 08:22:16.390952  118624 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:22:16.390981  118624 out.go:374] Setting ErrFile to fd 2...
	I1123 08:22:16.390989  118624 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:22:16.391457  118624 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21969-103686/.minikube/bin
	I1123 08:22:16.392074  118624 mustload.go:66] Loading cluster: addons-450053
	I1123 08:22:16.392437  118624 config.go:182] Loaded profile config "addons-450053": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 08:22:16.392453  118624 addons.go:622] checking whether the cluster is paused
	I1123 08:22:16.392533  118624 config.go:182] Loaded profile config "addons-450053": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 08:22:16.392549  118624 host.go:66] Checking if "addons-450053" exists ...
	I1123 08:22:16.392916  118624 cli_runner.go:164] Run: docker container inspect addons-450053 --format={{.State.Status}}
	I1123 08:22:16.411402  118624 ssh_runner.go:195] Run: systemctl --version
	I1123 08:22:16.411460  118624 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-450053
	I1123 08:22:16.428904  118624 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21969-103686/.minikube/machines/addons-450053/id_rsa Username:docker}
	I1123 08:22:16.529580  118624 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1123 08:22:16.529698  118624 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1123 08:22:16.559204  118624 cri.go:89] found id: "738d8d379f2513ebbed6c9882209756963a949bde3ed19ade5de8580001c43b6"
	I1123 08:22:16.559227  118624 cri.go:89] found id: "d984a1356e5ecf35be65e8fc6e7992bb042d8927a704c9b1e8331c05254332d5"
	I1123 08:22:16.559231  118624 cri.go:89] found id: "f1bd36bf8d3aa419e06a2d8728e06eef3a4eb3bac9a5f4c3b24fff0f491bdd61"
	I1123 08:22:16.559235  118624 cri.go:89] found id: "e39671b6291757e254f89dc6033c7d24376b7c7120673820ff9f2cd071649ede"
	I1123 08:22:16.559237  118624 cri.go:89] found id: "524005afa9256011512767926b02159bfbb545a2d097df64aeda6918b32cfbaa"
	I1123 08:22:16.559241  118624 cri.go:89] found id: "9989944eaa26fdbd8c011baeec7cf3efbfbbe246f5276b6ceecbd64d61294399"
	I1123 08:22:16.559245  118624 cri.go:89] found id: "e3688d5b85c227523b5a3ce94991d4ee820fdc1ae296225f370587505ff591b6"
	I1123 08:22:16.559249  118624 cri.go:89] found id: "8ecc013e239af1858173ffe38500069f30090d7c4a8d2e55e0cf7931a593fbbe"
	I1123 08:22:16.559254  118624 cri.go:89] found id: "1dfc56fc8d94b1225a098a523c9650f6663217b21237541dc906578e3effc03d"
	I1123 08:22:16.559263  118624 cri.go:89] found id: "227f1cba9bc38078f86a2ee004edc57f34ac09f7aae18e70a35257d97524a389"
	I1123 08:22:16.559268  118624 cri.go:89] found id: "878966c2c1dd7601f149f13eb451daa7034eebd08cef35eebb83a577b882ce48"
	I1123 08:22:16.559273  118624 cri.go:89] found id: "f9cd2adc0709d244a2c7bc3357291110cd3b690d9689c58d1d015c5371f7f2ca"
	I1123 08:22:16.559277  118624 cri.go:89] found id: "a6ff371d12340c0a9617d886be8620819d349d024e915a5c18777920e9522800"
	I1123 08:22:16.559282  118624 cri.go:89] found id: "8364e195c165b56eaa9cee7e25199a566d7f232fea45a9c0da829ce74e7a169e"
	I1123 08:22:16.559290  118624 cri.go:89] found id: "bca140d99c87f34e3a5c81b3e3f53364fd36a08c860a55709db43ad1f00c7bd8"
	I1123 08:22:16.559297  118624 cri.go:89] found id: "0e62c249e71fecd3ff09a415c2a850ba5eb56735172347f36a18693f8631498e"
	I1123 08:22:16.559305  118624 cri.go:89] found id: "4ea39bfdb1b8efcabd00b3a8f0c5de2e2517bbf238a941a88f165c0fd97b9473"
	I1123 08:22:16.559310  118624 cri.go:89] found id: "fc8e0ddc56a4bb558f8c5f776af3a634abca3c1b4e966b269f9b6b6d44daab9e"
	I1123 08:22:16.559315  118624 cri.go:89] found id: "f5b7d2b9fc7fd00d96afc93889d6409f70924a5f3c846c8b45a49812283c03d7"
	I1123 08:22:16.559320  118624 cri.go:89] found id: "204df826a5f7fe5bc267f72965e69d0d4468a7d03c4f27b3c4dbc85558bcb790"
	I1123 08:22:16.559323  118624 cri.go:89] found id: "2b03a5a989737025a7389ea9ffc8f4440c697509dd7c3423436b93deabfc0635"
	I1123 08:22:16.559326  118624 cri.go:89] found id: "5ce09f86a113c208d0fb8a89714d1855c858b529d081cf18c7e93d6084542e29"
	I1123 08:22:16.559328  118624 cri.go:89] found id: "3d0e901e59417cd60672ef724e6e4202b2c13b0499e5b198455979ade2929d18"
	I1123 08:22:16.559340  118624 cri.go:89] found id: "58c0dd74075cff851380d2c9c1fbd280646b92ecf56beb6cd7c81444769b9354"
	I1123 08:22:16.559346  118624 cri.go:89] found id: ""
	I1123 08:22:16.559391  118624 ssh_runner.go:195] Run: sudo runc list -f json
	I1123 08:22:16.573130  118624 out.go:203] 
	W1123 08:22:16.574195  118624 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T08:22:16Z" level=error msg="open /run/runc: no such file or directory"
	
	W1123 08:22:16.574219  118624 out.go:285] * 
	W1123 08:22:16.577322  118624 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9e377edc2b59264359e9c26f81b048e390fa608a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1123 08:22:16.578386  118624 out.go:203] 
** /stderr **
addons_test.go:1055: failed to disable metrics-server addon: args "out/minikube-linux-amd64 -p addons-450053 addons disable metrics-server --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/MetricsServer (6.31s)
TestAddons/parallel/CSI (49.06s)
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI
=== CONT  TestAddons/parallel/CSI
I1123 08:22:12.923745  107234 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1123 08:22:12.927400  107234 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1123 08:22:12.927429  107234 kapi.go:107] duration metric: took 3.710569ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:549: csi-hostpath-driver pods stabilized in 3.723393ms
addons_test.go:552: (dbg) Run:  kubectl --context addons-450053 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:557: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-450053 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-450053 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-450053 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-450053 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-450053 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-450053 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-450053 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-450053 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-450053 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-450053 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-450053 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-450053 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-450053 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-450053 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-450053 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:562: (dbg) Run:  kubectl --context addons-450053 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:567: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:352: "task-pv-pod" [c1fe7e65-1a0e-4233-b804-92f8f707b7e3] Pending
helpers_test.go:352: "task-pv-pod" [c1fe7e65-1a0e-4233-b804-92f8f707b7e3] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod" [c1fe7e65-1a0e-4233-b804-92f8f707b7e3] Running
addons_test.go:567: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 11.003622158s
addons_test.go:572: (dbg) Run:  kubectl --context addons-450053 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:577: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:427: (dbg) Run:  kubectl --context addons-450053 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: (dbg) Run:  kubectl --context addons-450053 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:582: (dbg) Run:  kubectl --context addons-450053 delete pod task-pv-pod
addons_test.go:588: (dbg) Run:  kubectl --context addons-450053 delete pvc hpvc
addons_test.go:594: (dbg) Run:  kubectl --context addons-450053 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:599: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-450053 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-450053 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-450053 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-450053 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-450053 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-450053 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-450053 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-450053 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-450053 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-450053 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-450053 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-450053 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-450053 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:604: (dbg) Run:  kubectl --context addons-450053 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:609: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:352: "task-pv-pod-restore" [2581e5ad-5af2-4d6f-8350-916672831706] Pending
helpers_test.go:352: "task-pv-pod-restore" [2581e5ad-5af2-4d6f-8350-916672831706] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod-restore" [2581e5ad-5af2-4d6f-8350-916672831706] Running
addons_test.go:609: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.00328667s
addons_test.go:614: (dbg) Run:  kubectl --context addons-450053 delete pod task-pv-pod-restore
addons_test.go:618: (dbg) Run:  kubectl --context addons-450053 delete pvc hpvc-restore
addons_test.go:622: (dbg) Run:  kubectl --context addons-450053 delete volumesnapshot new-snapshot-demo
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-450053 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-450053 addons disable volumesnapshots --alsologtostderr -v=1: exit status 11 (248.028153ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1123 08:23:01.540894  120794 out.go:360] Setting OutFile to fd 1 ...
	I1123 08:23:01.541027  120794 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:23:01.541037  120794 out.go:374] Setting ErrFile to fd 2...
	I1123 08:23:01.541044  120794 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:23:01.541334  120794 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21969-103686/.minikube/bin
	I1123 08:23:01.541649  120794 mustload.go:66] Loading cluster: addons-450053
	I1123 08:23:01.542136  120794 config.go:182] Loaded profile config "addons-450053": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 08:23:01.542157  120794 addons.go:622] checking whether the cluster is paused
	I1123 08:23:01.542280  120794 config.go:182] Loaded profile config "addons-450053": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 08:23:01.542300  120794 host.go:66] Checking if "addons-450053" exists ...
	I1123 08:23:01.542822  120794 cli_runner.go:164] Run: docker container inspect addons-450053 --format={{.State.Status}}
	I1123 08:23:01.561414  120794 ssh_runner.go:195] Run: systemctl --version
	I1123 08:23:01.561478  120794 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-450053
	I1123 08:23:01.578690  120794 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21969-103686/.minikube/machines/addons-450053/id_rsa Username:docker}
	I1123 08:23:01.679543  120794 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1123 08:23:01.679650  120794 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1123 08:23:01.708692  120794 cri.go:89] found id: "738d8d379f2513ebbed6c9882209756963a949bde3ed19ade5de8580001c43b6"
	I1123 08:23:01.708739  120794 cri.go:89] found id: "d984a1356e5ecf35be65e8fc6e7992bb042d8927a704c9b1e8331c05254332d5"
	I1123 08:23:01.708745  120794 cri.go:89] found id: "f1bd36bf8d3aa419e06a2d8728e06eef3a4eb3bac9a5f4c3b24fff0f491bdd61"
	I1123 08:23:01.708750  120794 cri.go:89] found id: "e39671b6291757e254f89dc6033c7d24376b7c7120673820ff9f2cd071649ede"
	I1123 08:23:01.708753  120794 cri.go:89] found id: "524005afa9256011512767926b02159bfbb545a2d097df64aeda6918b32cfbaa"
	I1123 08:23:01.708757  120794 cri.go:89] found id: "9989944eaa26fdbd8c011baeec7cf3efbfbbe246f5276b6ceecbd64d61294399"
	I1123 08:23:01.708760  120794 cri.go:89] found id: "e3688d5b85c227523b5a3ce94991d4ee820fdc1ae296225f370587505ff591b6"
	I1123 08:23:01.708772  120794 cri.go:89] found id: "8ecc013e239af1858173ffe38500069f30090d7c4a8d2e55e0cf7931a593fbbe"
	I1123 08:23:01.708776  120794 cri.go:89] found id: "1dfc56fc8d94b1225a098a523c9650f6663217b21237541dc906578e3effc03d"
	I1123 08:23:01.708790  120794 cri.go:89] found id: "227f1cba9bc38078f86a2ee004edc57f34ac09f7aae18e70a35257d97524a389"
	I1123 08:23:01.708800  120794 cri.go:89] found id: "878966c2c1dd7601f149f13eb451daa7034eebd08cef35eebb83a577b882ce48"
	I1123 08:23:01.708804  120794 cri.go:89] found id: "f9cd2adc0709d244a2c7bc3357291110cd3b690d9689c58d1d015c5371f7f2ca"
	I1123 08:23:01.708809  120794 cri.go:89] found id: "a6ff371d12340c0a9617d886be8620819d349d024e915a5c18777920e9522800"
	I1123 08:23:01.708817  120794 cri.go:89] found id: "8364e195c165b56eaa9cee7e25199a566d7f232fea45a9c0da829ce74e7a169e"
	I1123 08:23:01.708822  120794 cri.go:89] found id: "bca140d99c87f34e3a5c81b3e3f53364fd36a08c860a55709db43ad1f00c7bd8"
	I1123 08:23:01.708842  120794 cri.go:89] found id: "0e62c249e71fecd3ff09a415c2a850ba5eb56735172347f36a18693f8631498e"
	I1123 08:23:01.708852  120794 cri.go:89] found id: "4ea39bfdb1b8efcabd00b3a8f0c5de2e2517bbf238a941a88f165c0fd97b9473"
	I1123 08:23:01.708857  120794 cri.go:89] found id: "fc8e0ddc56a4bb558f8c5f776af3a634abca3c1b4e966b269f9b6b6d44daab9e"
	I1123 08:23:01.708862  120794 cri.go:89] found id: "f5b7d2b9fc7fd00d96afc93889d6409f70924a5f3c846c8b45a49812283c03d7"
	I1123 08:23:01.708866  120794 cri.go:89] found id: "204df826a5f7fe5bc267f72965e69d0d4468a7d03c4f27b3c4dbc85558bcb790"
	I1123 08:23:01.708874  120794 cri.go:89] found id: "2b03a5a989737025a7389ea9ffc8f4440c697509dd7c3423436b93deabfc0635"
	I1123 08:23:01.708882  120794 cri.go:89] found id: "5ce09f86a113c208d0fb8a89714d1855c858b529d081cf18c7e93d6084542e29"
	I1123 08:23:01.708887  120794 cri.go:89] found id: "3d0e901e59417cd60672ef724e6e4202b2c13b0499e5b198455979ade2929d18"
	I1123 08:23:01.708892  120794 cri.go:89] found id: "58c0dd74075cff851380d2c9c1fbd280646b92ecf56beb6cd7c81444769b9354"
	I1123 08:23:01.708899  120794 cri.go:89] found id: ""
	I1123 08:23:01.708978  120794 ssh_runner.go:195] Run: sudo runc list -f json
	I1123 08:23:01.722831  120794 out.go:203] 
	W1123 08:23:01.724019  120794 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T08:23:01Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T08:23:01Z" level=error msg="open /run/runc: no such file or directory"
	
	W1123 08:23:01.724046  120794 out.go:285] * 
	* 
	W1123 08:23:01.727311  120794 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_f6150db7515caf82d8c4c5baeba9fd21f738a7e0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_f6150db7515caf82d8c4c5baeba9fd21f738a7e0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1123 08:23:01.728660  120794 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable volumesnapshots addon: args "out/minikube-linux-amd64 -p addons-450053 addons disable volumesnapshots --alsologtostderr -v=1": exit status 11
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-450053 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-450053 addons disable csi-hostpath-driver --alsologtostderr -v=1: exit status 11 (243.87216ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1123 08:23:01.788771  120874 out.go:360] Setting OutFile to fd 1 ...
	I1123 08:23:01.789051  120874 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:23:01.789060  120874 out.go:374] Setting ErrFile to fd 2...
	I1123 08:23:01.789065  120874 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:23:01.789246  120874 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21969-103686/.minikube/bin
	I1123 08:23:01.789528  120874 mustload.go:66] Loading cluster: addons-450053
	I1123 08:23:01.789850  120874 config.go:182] Loaded profile config "addons-450053": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 08:23:01.789866  120874 addons.go:622] checking whether the cluster is paused
	I1123 08:23:01.789948  120874 config.go:182] Loaded profile config "addons-450053": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 08:23:01.789975  120874 host.go:66] Checking if "addons-450053" exists ...
	I1123 08:23:01.790342  120874 cli_runner.go:164] Run: docker container inspect addons-450053 --format={{.State.Status}}
	I1123 08:23:01.807359  120874 ssh_runner.go:195] Run: systemctl --version
	I1123 08:23:01.807416  120874 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-450053
	I1123 08:23:01.824735  120874 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21969-103686/.minikube/machines/addons-450053/id_rsa Username:docker}
	I1123 08:23:01.924445  120874 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1123 08:23:01.924554  120874 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1123 08:23:01.953782  120874 cri.go:89] found id: "738d8d379f2513ebbed6c9882209756963a949bde3ed19ade5de8580001c43b6"
	I1123 08:23:01.953801  120874 cri.go:89] found id: "d984a1356e5ecf35be65e8fc6e7992bb042d8927a704c9b1e8331c05254332d5"
	I1123 08:23:01.953806  120874 cri.go:89] found id: "f1bd36bf8d3aa419e06a2d8728e06eef3a4eb3bac9a5f4c3b24fff0f491bdd61"
	I1123 08:23:01.953809  120874 cri.go:89] found id: "e39671b6291757e254f89dc6033c7d24376b7c7120673820ff9f2cd071649ede"
	I1123 08:23:01.953812  120874 cri.go:89] found id: "524005afa9256011512767926b02159bfbb545a2d097df64aeda6918b32cfbaa"
	I1123 08:23:01.953816  120874 cri.go:89] found id: "9989944eaa26fdbd8c011baeec7cf3efbfbbe246f5276b6ceecbd64d61294399"
	I1123 08:23:01.953818  120874 cri.go:89] found id: "e3688d5b85c227523b5a3ce94991d4ee820fdc1ae296225f370587505ff591b6"
	I1123 08:23:01.953821  120874 cri.go:89] found id: "8ecc013e239af1858173ffe38500069f30090d7c4a8d2e55e0cf7931a593fbbe"
	I1123 08:23:01.953824  120874 cri.go:89] found id: "1dfc56fc8d94b1225a098a523c9650f6663217b21237541dc906578e3effc03d"
	I1123 08:23:01.953839  120874 cri.go:89] found id: "227f1cba9bc38078f86a2ee004edc57f34ac09f7aae18e70a35257d97524a389"
	I1123 08:23:01.953845  120874 cri.go:89] found id: "878966c2c1dd7601f149f13eb451daa7034eebd08cef35eebb83a577b882ce48"
	I1123 08:23:01.953848  120874 cri.go:89] found id: "f9cd2adc0709d244a2c7bc3357291110cd3b690d9689c58d1d015c5371f7f2ca"
	I1123 08:23:01.953851  120874 cri.go:89] found id: "a6ff371d12340c0a9617d886be8620819d349d024e915a5c18777920e9522800"
	I1123 08:23:01.953854  120874 cri.go:89] found id: "8364e195c165b56eaa9cee7e25199a566d7f232fea45a9c0da829ce74e7a169e"
	I1123 08:23:01.953857  120874 cri.go:89] found id: "bca140d99c87f34e3a5c81b3e3f53364fd36a08c860a55709db43ad1f00c7bd8"
	I1123 08:23:01.953862  120874 cri.go:89] found id: "0e62c249e71fecd3ff09a415c2a850ba5eb56735172347f36a18693f8631498e"
	I1123 08:23:01.953867  120874 cri.go:89] found id: "4ea39bfdb1b8efcabd00b3a8f0c5de2e2517bbf238a941a88f165c0fd97b9473"
	I1123 08:23:01.953871  120874 cri.go:89] found id: "fc8e0ddc56a4bb558f8c5f776af3a634abca3c1b4e966b269f9b6b6d44daab9e"
	I1123 08:23:01.953874  120874 cri.go:89] found id: "f5b7d2b9fc7fd00d96afc93889d6409f70924a5f3c846c8b45a49812283c03d7"
	I1123 08:23:01.953877  120874 cri.go:89] found id: "204df826a5f7fe5bc267f72965e69d0d4468a7d03c4f27b3c4dbc85558bcb790"
	I1123 08:23:01.953891  120874 cri.go:89] found id: "2b03a5a989737025a7389ea9ffc8f4440c697509dd7c3423436b93deabfc0635"
	I1123 08:23:01.953896  120874 cri.go:89] found id: "5ce09f86a113c208d0fb8a89714d1855c858b529d081cf18c7e93d6084542e29"
	I1123 08:23:01.953899  120874 cri.go:89] found id: "3d0e901e59417cd60672ef724e6e4202b2c13b0499e5b198455979ade2929d18"
	I1123 08:23:01.953902  120874 cri.go:89] found id: "58c0dd74075cff851380d2c9c1fbd280646b92ecf56beb6cd7c81444769b9354"
	I1123 08:23:01.953905  120874 cri.go:89] found id: ""
	I1123 08:23:01.953944  120874 ssh_runner.go:195] Run: sudo runc list -f json
	I1123 08:23:01.967352  120874 out.go:203] 
	W1123 08:23:01.968540  120874 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T08:23:01Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T08:23:01Z" level=error msg="open /run/runc: no such file or directory"
	
	W1123 08:23:01.968560  120874 out.go:285] * 
	* 
	W1123 08:23:01.971972  120874 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_913eef9b964ccef8b5b536327192b81f4aff5da9_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_913eef9b964ccef8b5b536327192b81f4aff5da9_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1123 08:23:01.973109  120874 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable csi-hostpath-driver addon: args "out/minikube-linux-amd64 -p addons-450053 addons disable csi-hostpath-driver --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/CSI (49.06s)
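
(Editor's note: both disable calls above fail before ever touching the addon. The stderr shows minikube's paused-cluster check shelling in and running `sudo runc list -f json`, which errors with "open /run/runc: no such file or directory" on this crio node because the runc state root was never created; that single failure is what aborts every addon enable/disable in this run with exit status 11. A sketch of a more tolerant check, assuming runc's default /run/runc state root and treating its absence as "nothing is paused" — this fallback policy is an assumption for illustration, not minikube's actual code:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// runcContainer holds the two fields of `runc list -f json` output we need.
type runcContainer struct {
	ID     string `json:"id"`
	Status string `json:"status"`
}

// pausedContainerIDs lists paused container IDs via runc, but treats a
// missing state root as an empty answer instead of a fatal error, since
// `runc list` fails outright when no state has ever been written there.
func pausedContainerIDs() ([]string, error) {
	if err := exec.Command("sudo", "test", "-d", "/run/runc").Run(); err != nil {
		return nil, nil // no state root => nothing can be paused
	}
	out, err := exec.Command("sudo", "runc", "list", "-f", "json").Output()
	if err != nil {
		return nil, fmt.Errorf("runc list: %w", err)
	}
	var containers []runcContainer
	// runc prints "null" when the list is empty; that unmarshals to a nil slice.
	if err := json.Unmarshal(out, &containers); err != nil {
		return nil, err
	}
	var paused []string
	for _, c := range containers {
		if c.Status == "paused" {
			paused = append(paused, c.ID)
		}
	}
	return paused, nil
}

func main() {
	ids, err := pausedContainerIDs()
	fmt.Println(ids, err)
}

End of editor's note.)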

TestAddons/parallel/Headlamp (2.66s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:808: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-450053 --alsologtostderr -v=1
addons_test.go:808: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable headlamp -p addons-450053 --alsologtostderr -v=1: exit status 11 (283.103337ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1123 08:22:05.081716  116792 out.go:360] Setting OutFile to fd 1 ...
	I1123 08:22:05.082092  116792 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:22:05.082104  116792 out.go:374] Setting ErrFile to fd 2...
	I1123 08:22:05.082108  116792 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:22:05.082314  116792 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21969-103686/.minikube/bin
	I1123 08:22:05.082615  116792 mustload.go:66] Loading cluster: addons-450053
	I1123 08:22:05.082986  116792 config.go:182] Loaded profile config "addons-450053": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 08:22:05.083000  116792 addons.go:622] checking whether the cluster is paused
	I1123 08:22:05.083091  116792 config.go:182] Loaded profile config "addons-450053": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 08:22:05.083109  116792 host.go:66] Checking if "addons-450053" exists ...
	I1123 08:22:05.083499  116792 cli_runner.go:164] Run: docker container inspect addons-450053 --format={{.State.Status}}
	I1123 08:22:05.103466  116792 ssh_runner.go:195] Run: systemctl --version
	I1123 08:22:05.103541  116792 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-450053
	I1123 08:22:05.122258  116792 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21969-103686/.minikube/machines/addons-450053/id_rsa Username:docker}
	I1123 08:22:05.225866  116792 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1123 08:22:05.225958  116792 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1123 08:22:05.259066  116792 cri.go:89] found id: "738d8d379f2513ebbed6c9882209756963a949bde3ed19ade5de8580001c43b6"
	I1123 08:22:05.259091  116792 cri.go:89] found id: "d984a1356e5ecf35be65e8fc6e7992bb042d8927a704c9b1e8331c05254332d5"
	I1123 08:22:05.259097  116792 cri.go:89] found id: "f1bd36bf8d3aa419e06a2d8728e06eef3a4eb3bac9a5f4c3b24fff0f491bdd61"
	I1123 08:22:05.259103  116792 cri.go:89] found id: "e39671b6291757e254f89dc6033c7d24376b7c7120673820ff9f2cd071649ede"
	I1123 08:22:05.259108  116792 cri.go:89] found id: "524005afa9256011512767926b02159bfbb545a2d097df64aeda6918b32cfbaa"
	I1123 08:22:05.259115  116792 cri.go:89] found id: "9989944eaa26fdbd8c011baeec7cf3efbfbbe246f5276b6ceecbd64d61294399"
	I1123 08:22:05.259119  116792 cri.go:89] found id: "e3688d5b85c227523b5a3ce94991d4ee820fdc1ae296225f370587505ff591b6"
	I1123 08:22:05.259122  116792 cri.go:89] found id: "8ecc013e239af1858173ffe38500069f30090d7c4a8d2e55e0cf7931a593fbbe"
	I1123 08:22:05.259125  116792 cri.go:89] found id: "1dfc56fc8d94b1225a098a523c9650f6663217b21237541dc906578e3effc03d"
	I1123 08:22:05.259133  116792 cri.go:89] found id: "227f1cba9bc38078f86a2ee004edc57f34ac09f7aae18e70a35257d97524a389"
	I1123 08:22:05.259136  116792 cri.go:89] found id: "878966c2c1dd7601f149f13eb451daa7034eebd08cef35eebb83a577b882ce48"
	I1123 08:22:05.259139  116792 cri.go:89] found id: "f9cd2adc0709d244a2c7bc3357291110cd3b690d9689c58d1d015c5371f7f2ca"
	I1123 08:22:05.259142  116792 cri.go:89] found id: "a6ff371d12340c0a9617d886be8620819d349d024e915a5c18777920e9522800"
	I1123 08:22:05.259144  116792 cri.go:89] found id: "8364e195c165b56eaa9cee7e25199a566d7f232fea45a9c0da829ce74e7a169e"
	I1123 08:22:05.259147  116792 cri.go:89] found id: "bca140d99c87f34e3a5c81b3e3f53364fd36a08c860a55709db43ad1f00c7bd8"
	I1123 08:22:05.259151  116792 cri.go:89] found id: "0e62c249e71fecd3ff09a415c2a850ba5eb56735172347f36a18693f8631498e"
	I1123 08:22:05.259155  116792 cri.go:89] found id: "4ea39bfdb1b8efcabd00b3a8f0c5de2e2517bbf238a941a88f165c0fd97b9473"
	I1123 08:22:05.259159  116792 cri.go:89] found id: "fc8e0ddc56a4bb558f8c5f776af3a634abca3c1b4e966b269f9b6b6d44daab9e"
	I1123 08:22:05.259163  116792 cri.go:89] found id: "f5b7d2b9fc7fd00d96afc93889d6409f70924a5f3c846c8b45a49812283c03d7"
	I1123 08:22:05.259167  116792 cri.go:89] found id: "204df826a5f7fe5bc267f72965e69d0d4468a7d03c4f27b3c4dbc85558bcb790"
	I1123 08:22:05.259172  116792 cri.go:89] found id: "2b03a5a989737025a7389ea9ffc8f4440c697509dd7c3423436b93deabfc0635"
	I1123 08:22:05.259176  116792 cri.go:89] found id: "5ce09f86a113c208d0fb8a89714d1855c858b529d081cf18c7e93d6084542e29"
	I1123 08:22:05.259181  116792 cri.go:89] found id: "3d0e901e59417cd60672ef724e6e4202b2c13b0499e5b198455979ade2929d18"
	I1123 08:22:05.259185  116792 cri.go:89] found id: "58c0dd74075cff851380d2c9c1fbd280646b92ecf56beb6cd7c81444769b9354"
	I1123 08:22:05.259198  116792 cri.go:89] found id: ""
	I1123 08:22:05.259246  116792 ssh_runner.go:195] Run: sudo runc list -f json
	I1123 08:22:05.276857  116792 out.go:203] 
	W1123 08:22:05.278240  116792 out.go:285] X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T08:22:05Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T08:22:05Z" level=error msg="open /run/runc: no such file or directory"
	
	W1123 08:22:05.278269  116792 out.go:285] * 
	* 
	W1123 08:22:05.283111  116792 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_af3b8a9ce4f102efc219f1404c9eed7a69cbf2d5_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_af3b8a9ce4f102efc219f1404c9eed7a69cbf2d5_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1123 08:22:05.284896  116792 out.go:203] 

** /stderr **
addons_test.go:810: failed to enable headlamp addon: args: "out/minikube-linux-amd64 addons enable headlamp -p addons-450053 --alsologtostderr -v=1": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/Headlamp]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestAddons/parallel/Headlamp]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect addons-450053
helpers_test.go:243: (dbg) docker inspect addons-450053:

-- stdout --
	[
	    {
	        "Id": "439b1684c8e4e369ea75cdf25ddaf3fcff26600aaf3dce9c93db3462f4b8736b",
	        "Created": "2025-11-23T08:20:28.295158521Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 109264,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-23T08:20:28.326081012Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:133ca4ac39008d0056ad45d8cb70521d6b70d6e1b8bbff4678fd4b354efbdf70",
	        "ResolvConfPath": "/var/lib/docker/containers/439b1684c8e4e369ea75cdf25ddaf3fcff26600aaf3dce9c93db3462f4b8736b/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/439b1684c8e4e369ea75cdf25ddaf3fcff26600aaf3dce9c93db3462f4b8736b/hostname",
	        "HostsPath": "/var/lib/docker/containers/439b1684c8e4e369ea75cdf25ddaf3fcff26600aaf3dce9c93db3462f4b8736b/hosts",
	        "LogPath": "/var/lib/docker/containers/439b1684c8e4e369ea75cdf25ddaf3fcff26600aaf3dce9c93db3462f4b8736b/439b1684c8e4e369ea75cdf25ddaf3fcff26600aaf3dce9c93db3462f4b8736b-json.log",
	        "Name": "/addons-450053",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "addons-450053:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "addons-450053",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "439b1684c8e4e369ea75cdf25ddaf3fcff26600aaf3dce9c93db3462f4b8736b",
	                "LowerDir": "/var/lib/docker/overlay2/e9515a64ab879e78f20db4d5974939793e8d815710b31e0f1cec6f273213bc3f-init/diff:/var/lib/docker/overlay2/5d7426d7b7590330a796f9ce8929a3dfaf6f95af5e86f4e4ea7f2a7f53308616/diff",
	                "MergedDir": "/var/lib/docker/overlay2/e9515a64ab879e78f20db4d5974939793e8d815710b31e0f1cec6f273213bc3f/merged",
	                "UpperDir": "/var/lib/docker/overlay2/e9515a64ab879e78f20db4d5974939793e8d815710b31e0f1cec6f273213bc3f/diff",
	                "WorkDir": "/var/lib/docker/overlay2/e9515a64ab879e78f20db4d5974939793e8d815710b31e0f1cec6f273213bc3f/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "addons-450053",
	                "Source": "/var/lib/docker/volumes/addons-450053/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-450053",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-450053",
	                "name.minikube.sigs.k8s.io": "addons-450053",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "083e8641d527fe661d34cc5e7a4eba2580f777dd70174d55fc1409cddf766614",
	            "SandboxKey": "/var/run/docker/netns/083e8641d527",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32768"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32769"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32772"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32770"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32771"
	                    }
	                ]
	            },
	            "Networks": {
	                "addons-450053": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "4cd69527c282009ee2878a3d65df6895580a4b156354d85d1f1be8ca8e937d8e",
	                    "EndpointID": "2d1aee6768803d88f67fd94ba3a11cfe5c0f51177cf1d1b679e4b3d4fe3c27a2",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "MacAddress": "2e:13:5d:b6:84:6a",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-450053",
	                        "439b1684c8e4"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
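(Editor's note: the SSH endpoint 127.0.0.1:32768 that every failing command above dialed comes straight from the NetworkSettings.Ports map in this inspect output; the logs show it being extracted with the Go template {{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}. A self-contained sketch of that template evaluated against a trimmed stand-in for the inspect document — the JSON literal here is illustrative, not the full inspect payload:

package main

import (
	"encoding/json"
	"fmt"
	"os"
	"text/template"
)

// The same template the logs pass to `docker container inspect -f`:
// look up the "22/tcp" key in the Ports map, take the first binding,
// and read its HostPort field.
const portTmpl = `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`

func main() {
	// Trimmed stand-in for one element of the `docker inspect` array above.
	const inspectJSON = `{"NetworkSettings":{"Ports":{"22/tcp":[{"HostIp":"127.0.0.1","HostPort":"32768"}]}}}`

	var doc map[string]any
	if err := json.Unmarshal([]byte(inspectJSON), &doc); err != nil {
		panic(err)
	}
	t := template.Must(template.New("port").Parse(portTmpl))
	if err := t.Execute(os.Stdout, doc); err != nil {
		panic(err)
	}
	fmt.Println() // prints 32768, the host side of the container's 22/tcp mapping
}

End of editor's note.)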
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-450053 -n addons-450053
helpers_test.go:252: <<< TestAddons/parallel/Headlamp FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/Headlamp]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p addons-450053 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p addons-450053 logs -n 25: (1.125909519s)
helpers_test.go:260: TestAddons/parallel/Headlamp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                   │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-874990 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                │ download-only-874990   │ jenkins │ v1.37.0 │ 23 Nov 25 08:19 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ minikube               │ jenkins │ v1.37.0 │ 23 Nov 25 08:19 UTC │ 23 Nov 25 08:19 UTC │
	│ delete  │ -p download-only-874990                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-874990   │ jenkins │ v1.37.0 │ 23 Nov 25 08:19 UTC │ 23 Nov 25 08:19 UTC │
	│ start   │ -o=json --download-only -p download-only-173580 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                │ download-only-173580   │ jenkins │ v1.37.0 │ 23 Nov 25 08:19 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ minikube               │ jenkins │ v1.37.0 │ 23 Nov 25 08:20 UTC │ 23 Nov 25 08:20 UTC │
	│ delete  │ -p download-only-173580                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-173580   │ jenkins │ v1.37.0 │ 23 Nov 25 08:20 UTC │ 23 Nov 25 08:20 UTC │
	│ delete  │ -p download-only-874990                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-874990   │ jenkins │ v1.37.0 │ 23 Nov 25 08:20 UTC │ 23 Nov 25 08:20 UTC │
	│ delete  │ -p download-only-173580                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-173580   │ jenkins │ v1.37.0 │ 23 Nov 25 08:20 UTC │ 23 Nov 25 08:20 UTC │
	│ start   │ --download-only -p download-docker-494850 --alsologtostderr --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                                                                    │ download-docker-494850 │ jenkins │ v1.37.0 │ 23 Nov 25 08:20 UTC │                     │
	│ delete  │ -p download-docker-494850                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-docker-494850 │ jenkins │ v1.37.0 │ 23 Nov 25 08:20 UTC │ 23 Nov 25 08:20 UTC │
	│ start   │ --download-only -p binary-mirror-620131 --alsologtostderr --binary-mirror http://127.0.0.1:33645 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-620131   │ jenkins │ v1.37.0 │ 23 Nov 25 08:20 UTC │                     │
	│ delete  │ -p binary-mirror-620131                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ binary-mirror-620131   │ jenkins │ v1.37.0 │ 23 Nov 25 08:20 UTC │ 23 Nov 25 08:20 UTC │
	│ addons  │ disable dashboard -p addons-450053                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-450053          │ jenkins │ v1.37.0 │ 23 Nov 25 08:20 UTC │                     │
	│ addons  │ enable dashboard -p addons-450053                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-450053          │ jenkins │ v1.37.0 │ 23 Nov 25 08:20 UTC │                     │
	│ start   │ -p addons-450053 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-450053          │ jenkins │ v1.37.0 │ 23 Nov 25 08:20 UTC │ 23 Nov 25 08:21 UTC │
	│ addons  │ addons-450053 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-450053          │ jenkins │ v1.37.0 │ 23 Nov 25 08:21 UTC │                     │
	│ addons  │ addons-450053 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-450053          │ jenkins │ v1.37.0 │ 23 Nov 25 08:22 UTC │                     │
	│ addons  │ enable headlamp -p addons-450053 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-450053          │ jenkins │ v1.37.0 │ 23 Nov 25 08:22 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/23 08:20:07
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1123 08:20:07.962581  108626 out.go:360] Setting OutFile to fd 1 ...
	I1123 08:20:07.962692  108626 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:20:07.962704  108626 out.go:374] Setting ErrFile to fd 2...
	I1123 08:20:07.962708  108626 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:20:07.962913  108626 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21969-103686/.minikube/bin
	I1123 08:20:07.963457  108626 out.go:368] Setting JSON to false
	I1123 08:20:07.964229  108626 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":3748,"bootTime":1763882260,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1123 08:20:07.964281  108626 start.go:143] virtualization: kvm guest
	I1123 08:20:07.966143  108626 out.go:179] * [addons-450053] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1123 08:20:07.967403  108626 notify.go:221] Checking for updates...
	I1123 08:20:07.967424  108626 out.go:179]   - MINIKUBE_LOCATION=21969
	I1123 08:20:07.968976  108626 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1123 08:20:07.970272  108626 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21969-103686/kubeconfig
	I1123 08:20:07.971410  108626 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21969-103686/.minikube
	I1123 08:20:07.972471  108626 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1123 08:20:07.973729  108626 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1123 08:20:07.974936  108626 driver.go:422] Setting default libvirt URI to qemu:///system
	I1123 08:20:07.999075  108626 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1123 08:20:07.999177  108626 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 08:20:08.060279  108626 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:28 OomKillDisable:false NGoroutines:51 SystemTime:2025-11-23 08:20:08.048877902 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1123 08:20:08.060495  108626 docker.go:319] overlay module found
	I1123 08:20:08.062263  108626 out.go:179] * Using the docker driver based on user configuration
	I1123 08:20:08.063684  108626 start.go:309] selected driver: docker
	I1123 08:20:08.063702  108626 start.go:927] validating driver "docker" against <nil>
	I1123 08:20:08.063715  108626 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1123 08:20:08.064233  108626 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 08:20:08.118613  108626 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:28 OomKillDisable:false NGoroutines:51 SystemTime:2025-11-23 08:20:08.109286864 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
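
	(Annotation: the two "docker system info --format {{json .}}" probes above are how the driver check gathers host capabilities. Below is a minimal Go sketch of the same probe, decoding only a few of the fields visible in the log; the struct is illustrative, not minikube's actual type.)

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// dockerInfo keeps only the handful of fields this sketch inspects;
// the real `docker info` JSON carries many more (see the log above).
type dockerInfo struct {
	ServerVersion   string `json:"ServerVersion"`
	CgroupDriver    string `json:"CgroupDriver"`
	OperatingSystem string `json:"OperatingSystem"`
	NCPU            int    `json:"NCPU"`
	MemTotal        int64  `json:"MemTotal"`
}

func main() {
	// Same invocation the cli_runner lines above record.
	out, err := exec.Command("docker", "system", "info", "--format", "{{json .}}").Output()
	if err != nil {
		panic(err)
	}
	var info dockerInfo
	if err := json.Unmarshal(out, &info); err != nil {
		panic(err)
	}
	fmt.Printf("docker %s, cgroup driver %s, %d CPUs, %d bytes RAM, %s\n",
		info.ServerVersion, info.CgroupDriver, info.NCPU, info.MemTotal, info.OperatingSystem)
}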
	I1123 08:20:08.118760  108626 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1123 08:20:08.118984  108626 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1123 08:20:08.120776  108626 out.go:179] * Using Docker driver with root privileges
	I1123 08:20:08.122099  108626 cni.go:84] Creating CNI manager for ""
	I1123 08:20:08.122166  108626 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1123 08:20:08.122178  108626 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1123 08:20:08.122267  108626 start.go:353] cluster config:
	{Name:addons-450053 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-450053 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 08:20:08.123768  108626 out.go:179] * Starting "addons-450053" primary control-plane node in "addons-450053" cluster
	I1123 08:20:08.124911  108626 cache.go:134] Beginning downloading kic base image for docker with crio
	I1123 08:20:08.126197  108626 out.go:179] * Pulling base image v0.0.48-1763789673-21948 ...
	I1123 08:20:08.127449  108626 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1123 08:20:08.127488  108626 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21969-103686/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1123 08:20:08.127497  108626 cache.go:65] Caching tarball of preloaded images
	I1123 08:20:08.127538  108626 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1123 08:20:08.127614  108626 preload.go:238] Found /home/jenkins/minikube-integration/21969-103686/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1123 08:20:08.127629  108626 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
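
	(Annotation: the preload check above only downloads the images tarball when it is missing from the local cache. A hypothetical Go sketch of that check follows; the "v18" preload schema in the file name and the MINIKUBE_HOME cache layout are read off the log lines, not guaranteed API.)

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// preloadPath mirrors the tarball name visible in the log; treat the
// naming scheme as an assumption taken from this run.
func preloadPath(minikubeHome, k8sVersion, runtime string) string {
	name := fmt.Sprintf("preloaded-images-k8s-v18-%s-%s-overlay-amd64.tar.lz4", k8sVersion, runtime)
	return filepath.Join(minikubeHome, "cache", "preloaded-tarball", name)
}

func main() {
	// Assumes MINIKUBE_HOME points at the .minikube directory, as in the log.
	p := preloadPath(os.Getenv("MINIKUBE_HOME"), "v1.34.1", "cri-o")
	if _, err := os.Stat(p); err == nil {
		fmt.Println("found local preload, skipping download:", p)
	} else {
		fmt.Println("no local preload, would download:", p)
	}
}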
	I1123 08:20:08.127990  108626 profile.go:143] Saving config to /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/addons-450053/config.json ...
	I1123 08:20:08.128031  108626 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/addons-450053/config.json: {Name:mk274e1e607b83af9e40fd0d0cc8661c8ff49964 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:20:08.145310  108626 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f to local cache
	I1123 08:20:08.145433  108626 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local cache directory
	I1123 08:20:08.145450  108626 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local cache directory, skipping pull
	I1123 08:20:08.145455  108626 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in cache, skipping pull
	I1123 08:20:08.145465  108626 cache.go:166] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f as a tarball
	I1123 08:20:08.145469  108626 cache.go:176] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f from local cache
	I1123 08:20:20.462525  108626 cache.go:178] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f from cached tarball
	I1123 08:20:20.462582  108626 cache.go:243] Successfully downloaded all kic artifacts
	I1123 08:20:20.462651  108626 start.go:360] acquireMachinesLock for addons-450053: {Name:mk177bc578c2349bdc0093b5404d31df1a3bbdc2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1123 08:20:20.462784  108626 start.go:364] duration metric: took 102.758µs to acquireMachinesLock for "addons-450053"
	I1123 08:20:20.462814  108626 start.go:93] Provisioning new machine with config: &{Name:addons-450053 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-450053 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1123 08:20:20.462913  108626 start.go:125] createHost starting for "" (driver="docker")
	I1123 08:20:20.464808  108626 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1123 08:20:20.465048  108626 start.go:159] libmachine.API.Create for "addons-450053" (driver="docker")
	I1123 08:20:20.465087  108626 client.go:173] LocalClient.Create starting
	I1123 08:20:20.465191  108626 main.go:143] libmachine: Creating CA: /home/jenkins/minikube-integration/21969-103686/.minikube/certs/ca.pem
	I1123 08:20:20.574868  108626 main.go:143] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21969-103686/.minikube/certs/cert.pem
	I1123 08:20:20.667339  108626 cli_runner.go:164] Run: docker network inspect addons-450053 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1123 08:20:20.686522  108626 cli_runner.go:211] docker network inspect addons-450053 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1123 08:20:20.686607  108626 network_create.go:284] running [docker network inspect addons-450053] to gather additional debugging logs...
	I1123 08:20:20.686627  108626 cli_runner.go:164] Run: docker network inspect addons-450053
	W1123 08:20:20.702688  108626 cli_runner.go:211] docker network inspect addons-450053 returned with exit code 1
	I1123 08:20:20.702724  108626 network_create.go:287] error running [docker network inspect addons-450053]: docker network inspect addons-450053: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-450053 not found
	I1123 08:20:20.702742  108626 network_create.go:289] output of [docker network inspect addons-450053]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-450053 not found
	
	** /stderr **
	I1123 08:20:20.702845  108626 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1123 08:20:20.719295  108626 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00151b330}
	I1123 08:20:20.719339  108626 network_create.go:124] attempt to create docker network addons-450053 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1123 08:20:20.719397  108626 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-450053 addons-450053
	I1123 08:20:20.769669  108626 network_create.go:108] docker network addons-450053 192.168.49.0/24 created
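
	(Annotation: the network_create step above picks the first free private /24 and creates a labeled bridge network for the cluster. A minimal Go sketch of the same `docker network create` invocation, using exactly the flags logged; names and values are taken from this run.)

package main

import (
	"fmt"
	"os/exec"
)

// createClusterNetwork shells out to docker with the flag set shown in
// the log: bridge driver, fixed subnet/gateway, masquerade + icc, MTU,
// and the two minikube bookkeeping labels.
func createClusterNetwork(name, subnet, gateway string, mtu int) error {
	args := []string{
		"network", "create", "--driver=bridge",
		"--subnet=" + subnet, "--gateway=" + gateway,
		"-o", "--ip-masq", "-o", "--icc",
		"-o", fmt.Sprintf("com.docker.network.driver.mtu=%d", mtu),
		"--label=created_by.minikube.sigs.k8s.io=true",
		"--label=name.minikube.sigs.k8s.io=" + name,
		name,
	}
	if out, err := exec.Command("docker", args...).CombinedOutput(); err != nil {
		return fmt.Errorf("docker network create: %v: %s", err, out)
	}
	return nil
}

func main() {
	if err := createClusterNetwork("addons-450053", "192.168.49.0/24", "192.168.49.1", 1500); err != nil {
		panic(err)
	}
}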
	I1123 08:20:20.769701  108626 kic.go:121] calculated static IP "192.168.49.2" for the "addons-450053" container
	I1123 08:20:20.769773  108626 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1123 08:20:20.786426  108626 cli_runner.go:164] Run: docker volume create addons-450053 --label name.minikube.sigs.k8s.io=addons-450053 --label created_by.minikube.sigs.k8s.io=true
	I1123 08:20:20.804065  108626 oci.go:103] Successfully created a docker volume addons-450053
	I1123 08:20:20.804155  108626 cli_runner.go:164] Run: docker run --rm --name addons-450053-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-450053 --entrypoint /usr/bin/test -v addons-450053:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -d /var/lib
	I1123 08:20:23.952600  108626 cli_runner.go:217] Completed: docker run --rm --name addons-450053-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-450053 --entrypoint /usr/bin/test -v addons-450053:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -d /var/lib: (3.148382796s)
	I1123 08:20:23.952643  108626 oci.go:107] Successfully prepared a docker volume addons-450053
	I1123 08:20:23.952692  108626 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1123 08:20:23.952708  108626 kic.go:194] Starting extracting preloaded images to volume ...
	I1123 08:20:23.952779  108626 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21969-103686/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-450053:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -I lz4 -xf /preloaded.tar -C /extractDir
	I1123 08:20:28.223743  108626 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21969-103686/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-450053:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -I lz4 -xf /preloaded.tar -C /extractDir: (4.270913467s)
	I1123 08:20:28.223778  108626 kic.go:203] duration metric: took 4.271067025s to extract preloaded images to volume ...
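
	(Annotation: the 4.27s step above unpacks the preload into the named volume using a throwaway container whose entrypoint is tar. A hypothetical Go sketch of that sidecar run follows; the log pins the kicbase image to a sha256 digest, which is elided here.)

package main

import "os/exec"

// extractPreload runs a disposable container that lz4-decompresses the
// preload tarball straight into the cluster's docker volume.
func extractPreload(tarball, volume, image string) error {
	return exec.Command("docker", "run", "--rm",
		"--entrypoint", "/usr/bin/tar",
		"-v", tarball+":/preloaded.tar:ro",
		"-v", volume+":/extractDir",
		image, "-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir",
	).Run()
}

func main() {
	// Paths and names as in this run; adjust for another profile.
	if err := extractPreload(
		"/home/jenkins/minikube-integration/21969-103686/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4",
		"addons-450053",
		"gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948",
	); err != nil {
		panic(err)
	}
}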
	W1123 08:20:28.223886  108626 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1123 08:20:28.223918  108626 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1123 08:20:28.223990  108626 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1123 08:20:28.279836  108626 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-450053 --name addons-450053 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-450053 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-450053 --network addons-450053 --ip 192.168.49.2 --volume addons-450053:/var --security-opt apparmor=unconfined --memory=4096mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f
	I1123 08:20:28.589572  108626 cli_runner.go:164] Run: docker container inspect addons-450053 --format={{.State.Running}}
	I1123 08:20:28.607516  108626 cli_runner.go:164] Run: docker container inspect addons-450053 --format={{.State.Status}}
	I1123 08:20:28.626032  108626 cli_runner.go:164] Run: docker exec addons-450053 stat /var/lib/dpkg/alternatives/iptables
	I1123 08:20:28.673268  108626 oci.go:144] the created container "addons-450053" has a running status.
	I1123 08:20:28.673301  108626 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21969-103686/.minikube/machines/addons-450053/id_rsa...
	I1123 08:20:28.702842  108626 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21969-103686/.minikube/machines/addons-450053/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1123 08:20:28.727506  108626 cli_runner.go:164] Run: docker container inspect addons-450053 --format={{.State.Status}}
	I1123 08:20:28.752192  108626 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1123 08:20:28.752214  108626 kic_runner.go:114] Args: [docker exec --privileged addons-450053 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1123 08:20:28.799824  108626 cli_runner.go:164] Run: docker container inspect addons-450053 --format={{.State.Status}}
	I1123 08:20:28.820861  108626 machine.go:94] provisionDockerMachine start ...
	I1123 08:20:28.821004  108626 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-450053
	I1123 08:20:28.841251  108626 main.go:143] libmachine: Using SSH client type: native
	I1123 08:20:28.841502  108626 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1123 08:20:28.841515  108626 main.go:143] libmachine: About to run SSH command:
	hostname
	I1123 08:20:28.842898  108626 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:53798->127.0.0.1:32768: read: connection reset by peer
	I1123 08:20:31.988052  108626 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-450053
	
	I1123 08:20:31.988083  108626 ubuntu.go:182] provisioning hostname "addons-450053"
	I1123 08:20:31.988154  108626 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-450053
	I1123 08:20:32.006140  108626 main.go:143] libmachine: Using SSH client type: native
	I1123 08:20:32.006362  108626 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1123 08:20:32.006376  108626 main.go:143] libmachine: About to run SSH command:
	sudo hostname addons-450053 && echo "addons-450053" | sudo tee /etc/hostname
	I1123 08:20:32.158014  108626 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-450053
	
	I1123 08:20:32.158087  108626 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-450053
	I1123 08:20:32.175328  108626 main.go:143] libmachine: Using SSH client type: native
	I1123 08:20:32.175530  108626 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1123 08:20:32.175546  108626 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-450053' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-450053/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-450053' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1123 08:20:32.317880  108626 main.go:143] libmachine: SSH cmd err, output: <nil>: 
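
	(Annotation: every provisioning command above reaches the node over SSH to the container's forwarded port, 127.0.0.1:32768 in this run, using the id_rsa key generated earlier. A minimal sketch of such a client with golang.org/x/crypto/ssh, an external module, follows; the port and key path are specific to this log.)

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Key path assumes MINIKUBE_HOME points at the .minikube directory.
	key, err := os.ReadFile(os.ExpandEnv("$MINIKUBE_HOME/machines/addons-450053/id_rsa"))
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a local kic container
	}
	client, err := ssh.Dial("tcp", "127.0.0.1:32768", cfg)
	if err != nil {
		panic(err)
	}
	defer client.Close()
	session, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer session.Close()
	out, err := session.CombinedOutput("hostname")
	if err != nil {
		panic(err)
	}
	fmt.Printf("node hostname: %s", out)
}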
	I1123 08:20:32.317914  108626 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21969-103686/.minikube CaCertPath:/home/jenkins/minikube-integration/21969-103686/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21969-103686/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21969-103686/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21969-103686/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21969-103686/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21969-103686/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21969-103686/.minikube}
	I1123 08:20:32.317945  108626 ubuntu.go:190] setting up certificates
	I1123 08:20:32.317987  108626 provision.go:84] configureAuth start
	I1123 08:20:32.318067  108626 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-450053
	I1123 08:20:32.336704  108626 provision.go:143] copyHostCerts
	I1123 08:20:32.336779  108626 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21969-103686/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21969-103686/.minikube/ca.pem (1078 bytes)
	I1123 08:20:32.336908  108626 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21969-103686/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21969-103686/.minikube/cert.pem (1123 bytes)
	I1123 08:20:32.336988  108626 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21969-103686/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21969-103686/.minikube/key.pem (1675 bytes)
	I1123 08:20:32.337059  108626 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21969-103686/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21969-103686/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21969-103686/.minikube/certs/ca-key.pem org=jenkins.addons-450053 san=[127.0.0.1 192.168.49.2 addons-450053 localhost minikube]
	I1123 08:20:32.413474  108626 provision.go:177] copyRemoteCerts
	I1123 08:20:32.413532  108626 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1123 08:20:32.413568  108626 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-450053
	I1123 08:20:32.431550  108626 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21969-103686/.minikube/machines/addons-450053/id_rsa Username:docker}
	I1123 08:20:32.532166  108626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-103686/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1123 08:20:32.551728  108626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-103686/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1123 08:20:32.568638  108626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-103686/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1123 08:20:32.585290  108626 provision.go:87] duration metric: took 267.278941ms to configureAuth
	I1123 08:20:32.585325  108626 ubuntu.go:206] setting minikube options for container-runtime
	I1123 08:20:32.585512  108626 config.go:182] Loaded profile config "addons-450053": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 08:20:32.585620  108626 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-450053
	I1123 08:20:32.603673  108626 main.go:143] libmachine: Using SSH client type: native
	I1123 08:20:32.603928  108626 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1123 08:20:32.603956  108626 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1123 08:20:32.883315  108626 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1123 08:20:32.883339  108626 machine.go:97] duration metric: took 4.062439878s to provisionDockerMachine
	I1123 08:20:32.883349  108626 client.go:176] duration metric: took 12.41825642s to LocalClient.Create
	I1123 08:20:32.883368  108626 start.go:167] duration metric: took 12.418322338s to libmachine.API.Create "addons-450053"
	I1123 08:20:32.883375  108626 start.go:293] postStartSetup for "addons-450053" (driver="docker")
	I1123 08:20:32.883385  108626 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1123 08:20:32.883435  108626 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1123 08:20:32.883473  108626 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-450053
	I1123 08:20:32.901171  108626 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21969-103686/.minikube/machines/addons-450053/id_rsa Username:docker}
	I1123 08:20:33.003767  108626 ssh_runner.go:195] Run: cat /etc/os-release
	I1123 08:20:33.007202  108626 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1123 08:20:33.007237  108626 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1123 08:20:33.007251  108626 filesync.go:126] Scanning /home/jenkins/minikube-integration/21969-103686/.minikube/addons for local assets ...
	I1123 08:20:33.007310  108626 filesync.go:126] Scanning /home/jenkins/minikube-integration/21969-103686/.minikube/files for local assets ...
	I1123 08:20:33.007334  108626 start.go:296] duration metric: took 123.952679ms for postStartSetup
	I1123 08:20:33.007624  108626 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-450053
	I1123 08:20:33.025023  108626 profile.go:143] Saving config to /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/addons-450053/config.json ...
	I1123 08:20:33.025363  108626 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1123 08:20:33.025420  108626 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-450053
	I1123 08:20:33.042114  108626 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21969-103686/.minikube/machines/addons-450053/id_rsa Username:docker}
	I1123 08:20:33.140113  108626 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1123 08:20:33.144701  108626 start.go:128] duration metric: took 12.681769644s to createHost
	I1123 08:20:33.144729  108626 start.go:83] releasing machines lock for "addons-450053", held for 12.681929129s
	I1123 08:20:33.144803  108626 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-450053
	I1123 08:20:33.163635  108626 ssh_runner.go:195] Run: cat /version.json
	I1123 08:20:33.163683  108626 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-450053
	I1123 08:20:33.163719  108626 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1123 08:20:33.163792  108626 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-450053
	I1123 08:20:33.183067  108626 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21969-103686/.minikube/machines/addons-450053/id_rsa Username:docker}
	I1123 08:20:33.183067  108626 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21969-103686/.minikube/machines/addons-450053/id_rsa Username:docker}
	I1123 08:20:33.333934  108626 ssh_runner.go:195] Run: systemctl --version
	I1123 08:20:33.340229  108626 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1123 08:20:33.373560  108626 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1123 08:20:33.377946  108626 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1123 08:20:33.378051  108626 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1123 08:20:33.402952  108626 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1123 08:20:33.402989  108626 start.go:496] detecting cgroup driver to use...
	I1123 08:20:33.403024  108626 detect.go:190] detected "systemd" cgroup driver on host os
	I1123 08:20:33.403069  108626 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1123 08:20:33.418807  108626 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1123 08:20:33.430720  108626 docker.go:218] disabling cri-docker service (if available) ...
	I1123 08:20:33.430772  108626 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1123 08:20:33.446267  108626 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1123 08:20:33.462706  108626 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1123 08:20:33.543601  108626 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1123 08:20:33.627527  108626 docker.go:234] disabling docker service ...
	I1123 08:20:33.627587  108626 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1123 08:20:33.646200  108626 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1123 08:20:33.658410  108626 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1123 08:20:33.740893  108626 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1123 08:20:33.822686  108626 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1123 08:20:33.835050  108626 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1123 08:20:33.848178  108626 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1123 08:20:33.848235  108626 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 08:20:33.857641  108626 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1123 08:20:33.857706  108626 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 08:20:33.866518  108626 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 08:20:33.875329  108626 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 08:20:33.883661  108626 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1123 08:20:33.891437  108626 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 08:20:33.899669  108626 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 08:20:33.913120  108626 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 08:20:33.921983  108626 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1123 08:20:33.930259  108626 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1123 08:20:33.937744  108626 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 08:20:34.012544  108626 ssh_runner.go:195] Run: sudo systemctl restart crio
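
	(Annotation: the run lines above rewrite /etc/crio/crio.conf.d/02-crio.conf with sed before restarting cri-o. A hypothetical Go sketch of three of those edits, pause image, cgroup manager, and conmon cgroup, applied with regexp instead of sed; it reproduces the end state rather than the exact delete-then-append sequence, and must run as root.)

package main

import (
	"os"
	"regexp"
)

func main() {
	const conf = "/etc/crio/crio.conf.d/02-crio.conf"
	data, err := os.ReadFile(conf)
	if err != nil {
		panic(err)
	}
	s := string(data)
	// Pin the pause image, as in the sed at 08:20:33.848235.
	s = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(s, `pause_image = "registry.k8s.io/pause:3.10.1"`)
	// Switch to the systemd cgroup manager and re-add conmon_cgroup after it.
	s = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(s, "cgroup_manager = \"systemd\"\nconmon_cgroup = \"pod\"")
	if err := os.WriteFile(conf, []byte(s), 0o644); err != nil {
		panic(err)
	}
}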
	I1123 08:20:34.152368  108626 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1123 08:20:34.152466  108626 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1123 08:20:34.156504  108626 start.go:564] Will wait 60s for crictl version
	I1123 08:20:34.156569  108626 ssh_runner.go:195] Run: which crictl
	I1123 08:20:34.160041  108626 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1123 08:20:34.185848  108626 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
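
	(Annotation: "Will wait 60s for crictl version" above is a readiness poll against the restarted runtime. A minimal Go sketch of such a poll, assuming crictl is on PATH and sudo is passwordless as on this CI host.)

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForCrictl retries `crictl version` once per second until cri-o
// answers or the budget runs out.
func waitForCrictl(timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if err := exec.Command("sudo", "crictl", "version").Run(); err == nil {
			return nil
		}
		time.Sleep(time.Second)
	}
	return fmt.Errorf("runtime did not answer within %s", timeout)
}

func main() {
	if err := waitForCrictl(60 * time.Second); err != nil {
		panic(err)
	}
	fmt.Println("cri-o is up")
}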
	I1123 08:20:34.185940  108626 ssh_runner.go:195] Run: crio --version
	I1123 08:20:34.213792  108626 ssh_runner.go:195] Run: crio --version
	I1123 08:20:34.245074  108626 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1123 08:20:34.246171  108626 cli_runner.go:164] Run: docker network inspect addons-450053 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1123 08:20:34.264304  108626 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1123 08:20:34.268607  108626 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1123 08:20:34.278765  108626 kubeadm.go:884] updating cluster {Name:addons-450053 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-450053 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1123 08:20:34.278880  108626 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1123 08:20:34.278930  108626 ssh_runner.go:195] Run: sudo crictl images --output json
	I1123 08:20:34.310145  108626 crio.go:514] all images are preloaded for cri-o runtime.
	I1123 08:20:34.310175  108626 crio.go:433] Images already preloaded, skipping extraction
	I1123 08:20:34.310229  108626 ssh_runner.go:195] Run: sudo crictl images --output json
	I1123 08:20:34.336015  108626 crio.go:514] all images are preloaded for cri-o runtime.
	I1123 08:20:34.336038  108626 cache_images.go:86] Images are preloaded, skipping loading
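
	(Annotation: the two `sudo crictl images --output json` runs above confirm the preload landed in cri-o before skipping the image-load path. A minimal Go sketch of parsing that JSON; the `images`/`repoTags` field names match crictl's JSON output as observed, treat them as assumptions.)

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// imageList models just enough of `crictl images --output json` to count
// what is already present in the runtime.
type imageList struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

func main() {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		panic(err)
	}
	var list imageList
	if err := json.Unmarshal(out, &list); err != nil {
		panic(err)
	}
	fmt.Printf("%d images already present in cri-o\n", len(list.Images))
}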
	I1123 08:20:34.336048  108626 kubeadm.go:935] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1123 08:20:34.336187  108626 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-450053 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:addons-450053 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1123 08:20:34.336274  108626 ssh_runner.go:195] Run: crio config
	I1123 08:20:34.378790  108626 cni.go:84] Creating CNI manager for ""
	I1123 08:20:34.378807  108626 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1123 08:20:34.378827  108626 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1123 08:20:34.378850  108626 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-450053 NodeName:addons-450053 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1123 08:20:34.379007  108626 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-450053"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1123 08:20:34.379065  108626 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1123 08:20:34.387171  108626 binaries.go:51] Found k8s binaries, skipping transfer
	I1123 08:20:34.387233  108626 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1123 08:20:34.394757  108626 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1123 08:20:34.406815  108626 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1123 08:20:34.421412  108626 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
	I1123 08:20:34.434117  108626 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1123 08:20:34.437638  108626 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1123 08:20:34.446915  108626 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 08:20:34.523495  108626 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 08:20:34.550351  108626 certs.go:69] Setting up /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/addons-450053 for IP: 192.168.49.2
	I1123 08:20:34.550391  108626 certs.go:195] generating shared ca certs ...
	I1123 08:20:34.550407  108626 certs.go:227] acquiring lock for ca certs: {Name:mkeed0bc088459219396fb35c578dfb0927f31a3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:20:34.550522  108626 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21969-103686/.minikube/ca.key
	I1123 08:20:34.631919  108626 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21969-103686/.minikube/ca.crt ...
	I1123 08:20:34.631948  108626 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21969-103686/.minikube/ca.crt: {Name:mk1d675d529f1bcc6a221325ecb3a430ae98eb0e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:20:34.632137  108626 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21969-103686/.minikube/ca.key ...
	I1123 08:20:34.632151  108626 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21969-103686/.minikube/ca.key: {Name:mk6bf8fbad88d6534617d5f3156d47b7090962e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:20:34.632225  108626 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21969-103686/.minikube/proxy-client-ca.key
	I1123 08:20:34.762285  108626 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21969-103686/.minikube/proxy-client-ca.crt ...
	I1123 08:20:34.762317  108626 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21969-103686/.minikube/proxy-client-ca.crt: {Name:mk2298706c07912f22208981415546e9068687dc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:20:34.762489  108626 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21969-103686/.minikube/proxy-client-ca.key ...
	I1123 08:20:34.762501  108626 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21969-103686/.minikube/proxy-client-ca.key: {Name:mk7c4cc9c3cf8070eb9b93fc403c104fbd5f1451 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:20:34.762568  108626 certs.go:257] generating profile certs ...
	I1123 08:20:34.762629  108626 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/addons-450053/client.key
	I1123 08:20:34.762643  108626 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/addons-450053/client.crt with IP's: []
	I1123 08:20:34.806101  108626 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/addons-450053/client.crt ...
	I1123 08:20:34.806142  108626 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/addons-450053/client.crt: {Name:mkadea70419b612e10ee90d8d53591fa9403899c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:20:34.806303  108626 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/addons-450053/client.key ...
	I1123 08:20:34.806315  108626 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/addons-450053/client.key: {Name:mk942aad5e59dc1c80fcad11319c8264450eab2b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:20:34.806388  108626 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/addons-450053/apiserver.key.70e65df3
	I1123 08:20:34.806406  108626 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/addons-450053/apiserver.crt.70e65df3 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1123 08:20:34.912467  108626 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/addons-450053/apiserver.crt.70e65df3 ...
	I1123 08:20:34.912496  108626 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/addons-450053/apiserver.crt.70e65df3: {Name:mk3cabba9cf634dbae747254f7448b700b363155 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:20:34.912653  108626 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/addons-450053/apiserver.key.70e65df3 ...
	I1123 08:20:34.912667  108626 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/addons-450053/apiserver.key.70e65df3: {Name:mk71693029136312bbc24afc86fa26f6c7d155a1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:20:34.912734  108626 certs.go:382] copying /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/addons-450053/apiserver.crt.70e65df3 -> /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/addons-450053/apiserver.crt
	I1123 08:20:34.912810  108626 certs.go:386] copying /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/addons-450053/apiserver.key.70e65df3 -> /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/addons-450053/apiserver.key
	I1123 08:20:34.912858  108626 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/addons-450053/proxy-client.key
	I1123 08:20:34.912873  108626 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/addons-450053/proxy-client.crt with IP's: []
	I1123 08:20:34.929762  108626 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/addons-450053/proxy-client.crt ...
	I1123 08:20:34.929780  108626 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/addons-450053/proxy-client.crt: {Name:mk426e0261f644a97ff6e2c4d1cb31f04350a9a8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:20:34.929894  108626 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/addons-450053/proxy-client.key ...
	I1123 08:20:34.929909  108626 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/addons-450053/proxy-client.key: {Name:mkbe68a9be26abcd20e0ac51b23b6695c01dfa81 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:20:34.930101  108626 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-103686/.minikube/certs/ca-key.pem (1675 bytes)
	I1123 08:20:34.930136  108626 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-103686/.minikube/certs/ca.pem (1078 bytes)
	I1123 08:20:34.930162  108626 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-103686/.minikube/certs/cert.pem (1123 bytes)
	I1123 08:20:34.930195  108626 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-103686/.minikube/certs/key.pem (1675 bytes)
	I1123 08:20:34.930701  108626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-103686/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1123 08:20:34.948789  108626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-103686/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1123 08:20:34.965754  108626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-103686/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1123 08:20:34.983196  108626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-103686/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1671 bytes)
	I1123 08:20:35.000826  108626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/addons-450053/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1123 08:20:35.018158  108626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/addons-450053/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1123 08:20:35.035770  108626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/addons-450053/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1123 08:20:35.052886  108626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/addons-450053/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1123 08:20:35.069839  108626 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-103686/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1123 08:20:35.089392  108626 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1123 08:20:35.101630  108626 ssh_runner.go:195] Run: openssl version
	I1123 08:20:35.107646  108626 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1123 08:20:35.118644  108626 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1123 08:20:35.122307  108626 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 23 08:20 /usr/share/ca-certificates/minikubeCA.pem
	I1123 08:20:35.122368  108626 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1123 08:20:35.155780  108626 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
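
	(Annotation: the openssl run and ln above install the minikube CA under its subject hash so OpenSSL-based clients trust it. A hypothetical Go sketch of the same two steps; it needs root for /etc/ssl/certs, and the b5213941 hash is specific to this CA.)

package main

import (
	"os"
	"os/exec"
	"strings"
)

func main() {
	const pem = "/usr/share/ca-certificates/minikubeCA.pem"
	// openssl prints the subject hash, e.g. b5213941 as in the log.
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		panic(err)
	}
	link := "/etc/ssl/certs/" + strings.TrimSpace(string(out)) + ".0"
	_ = os.Remove(link) // replace any stale link (the log uses ln -fs)
	if err := os.Symlink(pem, link); err != nil {
		panic(err)
	}
}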
	I1123 08:20:35.164557  108626 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1123 08:20:35.168326  108626 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
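
For reference, the "likely first start" branch above hinges on a plain existence check: if the kubelet client certificate is missing, minikube treats the node as fresh and proceeds straight to kubeadm init. A minimal sketch of that check (run against the local filesystem for illustration; the real check executes stat over SSH inside the node, as the log shows):

    package main

    import (
    	"errors"
    	"fmt"
    	"io/fs"
    	"os"
    )

    func main() {
    	// Missing cert => fresh node, safe to run `kubeadm init`.
    	_, err := os.Stat("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
    	if errors.Is(err, fs.ErrNotExist) {
    		fmt.Println("'apiserver-kubelet-client' cert doesn't exist, likely first start")
    	}
    }
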
	I1123 08:20:35.168384  108626 kubeadm.go:401] StartCluster: {Name:addons-450053 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-450053 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 08:20:35.168476  108626 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1123 08:20:35.168523  108626 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1123 08:20:35.195540  108626 cri.go:89] found id: ""
	I1123 08:20:35.195604  108626 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1123 08:20:35.203858  108626 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1123 08:20:35.211955  108626 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1123 08:20:35.212037  108626 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1123 08:20:35.219756  108626 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1123 08:20:35.219775  108626 kubeadm.go:158] found existing configuration files:
	
	I1123 08:20:35.219888  108626 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1123 08:20:35.227385  108626 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1123 08:20:35.227442  108626 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1123 08:20:35.234750  108626 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1123 08:20:35.242012  108626 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1123 08:20:35.242068  108626 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1123 08:20:35.249190  108626 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1123 08:20:35.256480  108626 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1123 08:20:35.256540  108626 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1123 08:20:35.263580  108626 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1123 08:20:35.271258  108626 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1123 08:20:35.271324  108626 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
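
The four grep/rm pairs above implement a simple stale-config sweep: each kubeconfig under /etc/kubernetes is kept only if it already references the expected control-plane endpoint, and removed otherwise (here all four files are simply absent, so the cleanup is a no-op). A minimal sketch of the same logic, assuming local file access rather than the ssh_runner used in the log:

    package main

    import (
    	"bytes"
    	"fmt"
    	"os"
    )

    func main() {
    	const endpoint = "https://control-plane.minikube.internal:8443"
    	for _, f := range []string{
    		"/etc/kubernetes/admin.conf",
    		"/etc/kubernetes/kubelet.conf",
    		"/etc/kubernetes/controller-manager.conf",
    		"/etc/kubernetes/scheduler.conf",
    	} {
    		data, err := os.ReadFile(f)
    		if err != nil || !bytes.Contains(data, []byte(endpoint)) {
    			fmt.Printf("%s does not reference %s - removing\n", f, endpoint)
    			os.Remove(f) // best-effort, mirrors `rm -f`
    		}
    	}
    }
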
	I1123 08:20:35.278564  108626 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1123 08:20:35.336219  108626 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1044-gcp\n", err: exit status 1
	I1123 08:20:35.390486  108626 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1123 08:20:46.331818  108626 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1123 08:20:46.331894  108626 kubeadm.go:319] [preflight] Running pre-flight checks
	I1123 08:20:46.332050  108626 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1123 08:20:46.332113  108626 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1044-gcp
	I1123 08:20:46.332145  108626 kubeadm.go:319] OS: Linux
	I1123 08:20:46.332207  108626 kubeadm.go:319] CGROUPS_CPU: enabled
	I1123 08:20:46.332276  108626 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1123 08:20:46.332371  108626 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1123 08:20:46.332425  108626 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1123 08:20:46.332504  108626 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1123 08:20:46.332578  108626 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1123 08:20:46.332629  108626 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1123 08:20:46.332667  108626 kubeadm.go:319] CGROUPS_IO: enabled
	I1123 08:20:46.332779  108626 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1123 08:20:46.332906  108626 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1123 08:20:46.333030  108626 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1123 08:20:46.333114  108626 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1123 08:20:46.335290  108626 out.go:252]   - Generating certificates and keys ...
	I1123 08:20:46.335364  108626 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1123 08:20:46.335438  108626 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1123 08:20:46.335517  108626 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1123 08:20:46.335592  108626 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1123 08:20:46.335679  108626 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1123 08:20:46.335755  108626 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1123 08:20:46.335821  108626 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1123 08:20:46.336000  108626 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [addons-450053 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1123 08:20:46.336087  108626 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1123 08:20:46.336231  108626 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [addons-450053 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1123 08:20:46.336309  108626 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1123 08:20:46.336407  108626 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1123 08:20:46.336448  108626 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1123 08:20:46.336533  108626 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1123 08:20:46.336617  108626 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1123 08:20:46.336729  108626 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1123 08:20:46.336813  108626 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1123 08:20:46.336916  108626 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1123 08:20:46.337010  108626 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1123 08:20:46.337120  108626 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1123 08:20:46.337212  108626 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1123 08:20:46.338366  108626 out.go:252]   - Booting up control plane ...
	I1123 08:20:46.338468  108626 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1123 08:20:46.338559  108626 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1123 08:20:46.338619  108626 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1123 08:20:46.338781  108626 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1123 08:20:46.338908  108626 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1123 08:20:46.339064  108626 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1123 08:20:46.339184  108626 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1123 08:20:46.339224  108626 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1123 08:20:46.339387  108626 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1123 08:20:46.339539  108626 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1123 08:20:46.339625  108626 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.501307511s
	I1123 08:20:46.339757  108626 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1123 08:20:46.339824  108626 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1123 08:20:46.339924  108626 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1123 08:20:46.340064  108626 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1123 08:20:46.340153  108626 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.633567907s
	I1123 08:20:46.340210  108626 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.208377193s
	I1123 08:20:46.340267  108626 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 4.002036688s
	I1123 08:20:46.340374  108626 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1123 08:20:46.340503  108626 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1123 08:20:46.340577  108626 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1123 08:20:46.340844  108626 kubeadm.go:319] [mark-control-plane] Marking the node addons-450053 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1123 08:20:46.340929  108626 kubeadm.go:319] [bootstrap-token] Using token: dg55x4.9vphqzd2ayx2cukh
	I1123 08:20:46.342119  108626 out.go:252]   - Configuring RBAC rules ...
	I1123 08:20:46.342208  108626 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1123 08:20:46.342280  108626 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1123 08:20:46.342415  108626 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1123 08:20:46.342552  108626 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1123 08:20:46.342696  108626 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1123 08:20:46.342767  108626 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1123 08:20:46.342859  108626 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1123 08:20:46.342903  108626 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1123 08:20:46.342946  108626 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1123 08:20:46.342956  108626 kubeadm.go:319] 
	I1123 08:20:46.343015  108626 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1123 08:20:46.343023  108626 kubeadm.go:319] 
	I1123 08:20:46.343091  108626 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1123 08:20:46.343100  108626 kubeadm.go:319] 
	I1123 08:20:46.343128  108626 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1123 08:20:46.343182  108626 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1123 08:20:46.343230  108626 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1123 08:20:46.343236  108626 kubeadm.go:319] 
	I1123 08:20:46.343281  108626 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1123 08:20:46.343287  108626 kubeadm.go:319] 
	I1123 08:20:46.343327  108626 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1123 08:20:46.343336  108626 kubeadm.go:319] 
	I1123 08:20:46.343386  108626 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1123 08:20:46.343448  108626 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1123 08:20:46.343503  108626 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1123 08:20:46.343513  108626 kubeadm.go:319] 
	I1123 08:20:46.343598  108626 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1123 08:20:46.343693  108626 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1123 08:20:46.343705  108626 kubeadm.go:319] 
	I1123 08:20:46.343796  108626 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token dg55x4.9vphqzd2ayx2cukh \
	I1123 08:20:46.343919  108626 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:25411732a305fa463b7606eb24f85c2336be0d99fc4e5db190f3fbac97d3dca3 \
	I1123 08:20:46.343945  108626 kubeadm.go:319] 	--control-plane 
	I1123 08:20:46.343951  108626 kubeadm.go:319] 
	I1123 08:20:46.344077  108626 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1123 08:20:46.344099  108626 kubeadm.go:319] 
	I1123 08:20:46.344205  108626 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token dg55x4.9vphqzd2ayx2cukh \
	I1123 08:20:46.344343  108626 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:25411732a305fa463b7606eb24f85c2336be0d99fc4e5db190f3fbac97d3dca3 
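
The --discovery-token-ca-cert-hash in the join commands above is not arbitrary: it is the SHA-256 digest of the cluster CA certificate's DER-encoded Subject Public Key Info, which joining nodes use to pin the CA before trusting the API server. A minimal Go sketch that recomputes it from the CA file written earlier in this log:

    package main

    import (
    	"crypto/sha256"
    	"crypto/x509"
    	"encoding/hex"
    	"encoding/pem"
    	"fmt"
    	"os"
    )

    func main() {
    	pemBytes, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
    	if err != nil {
    		panic(err)
    	}
    	block, _ := pem.Decode(pemBytes) // first PEM block is the CA cert
    	if block == nil {
    		panic("no PEM block in ca.crt")
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		panic(err)
    	}
    	// kubeadm hashes the DER-encoded Subject Public Key Info of the CA.
    	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
    	fmt.Printf("sha256:%s\n", hex.EncodeToString(sum[:]))
    }
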
	I1123 08:20:46.344362  108626 cni.go:84] Creating CNI manager for ""
	I1123 08:20:46.344372  108626 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1123 08:20:46.345767  108626 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1123 08:20:46.346885  108626 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1123 08:20:46.351238  108626 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1123 08:20:46.351254  108626 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1123 08:20:46.363723  108626 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1123 08:20:46.561033  108626 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1123 08:20:46.561129  108626 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:20:46.561135  108626 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-450053 minikube.k8s.io/updated_at=2025_11_23T08_20_46_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=50c3a8a3c03e8a84b6c978a884d21c3de8c6d4f1 minikube.k8s.io/name=addons-450053 minikube.k8s.io/primary=true
	I1123 08:20:46.639906  108626 ops.go:34] apiserver oom_adj: -16
	I1123 08:20:46.640040  108626 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:20:47.140434  108626 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:20:47.640998  108626 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:20:48.140695  108626 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:20:48.640265  108626 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:20:49.140768  108626 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:20:49.640721  108626 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:20:50.140928  108626 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:20:50.640158  108626 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:20:51.141010  108626 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 08:20:51.207056  108626 kubeadm.go:1114] duration metric: took 4.646004937s to wait for elevateKubeSystemPrivileges
	I1123 08:20:51.207099  108626 kubeadm.go:403] duration metric: took 16.038718258s to StartCluster
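
The two "duration metric" lines follow the usual time.Since idiom; a tiny self-contained sketch of the pattern (function name and sleep are illustrative stand-ins for the real work):

    package main

    import (
    	"log"
    	"time"
    )

    func startCluster() { time.Sleep(10 * time.Millisecond) } // stand-in for the real work

    func main() {
    	start := time.Now()
    	startCluster()
    	log.Printf("duration metric: took %s to StartCluster", time.Since(start))
    }
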
	I1123 08:20:51.207122  108626 settings.go:142] acquiring lock: {Name:mk7e59eae8b3289f60fef384e6a5716369959bb2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:20:51.207249  108626 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21969-103686/kubeconfig
	I1123 08:20:51.207603  108626 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21969-103686/kubeconfig: {Name:mk8cdc20da988b29689f768b3cb01ff7e637077d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 08:20:51.207828  108626 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1123 08:20:51.207860  108626 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1123 08:20:51.207934  108626 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1123 08:20:51.208095  108626 addons.go:70] Setting yakd=true in profile "addons-450053"
	I1123 08:20:51.208130  108626 addons.go:239] Setting addon yakd=true in "addons-450053"
	I1123 08:20:51.208139  108626 config.go:182] Loaded profile config "addons-450053": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 08:20:51.208164  108626 host.go:66] Checking if "addons-450053" exists ...
	I1123 08:20:51.208165  108626 addons.go:70] Setting inspektor-gadget=true in profile "addons-450053"
	I1123 08:20:51.208196  108626 addons.go:70] Setting default-storageclass=true in profile "addons-450053"
	I1123 08:20:51.208204  108626 addons.go:239] Setting addon inspektor-gadget=true in "addons-450053"
	I1123 08:20:51.208217  108626 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "addons-450053"
	I1123 08:20:51.208229  108626 addons.go:70] Setting registry-creds=true in profile "addons-450053"
	I1123 08:20:51.208243  108626 addons.go:70] Setting gcp-auth=true in profile "addons-450053"
	I1123 08:20:51.208242  108626 addons.go:70] Setting cloud-spanner=true in profile "addons-450053"
	I1123 08:20:51.208253  108626 addons.go:239] Setting addon registry-creds=true in "addons-450053"
	I1123 08:20:51.208262  108626 addons.go:239] Setting addon cloud-spanner=true in "addons-450053"
	I1123 08:20:51.208271  108626 addons.go:70] Setting ingress=true in profile "addons-450053"
	I1123 08:20:51.208279  108626 host.go:66] Checking if "addons-450053" exists ...
	I1123 08:20:51.208288  108626 addons.go:70] Setting ingress-dns=true in profile "addons-450053"
	I1123 08:20:51.208293  108626 host.go:66] Checking if "addons-450053" exists ...
	I1123 08:20:51.208301  108626 addons.go:239] Setting addon ingress-dns=true in "addons-450053"
	I1123 08:20:51.208328  108626 host.go:66] Checking if "addons-450053" exists ...
	I1123 08:20:51.208345  108626 addons.go:70] Setting amd-gpu-device-plugin=true in profile "addons-450053"
	I1123 08:20:51.208375  108626 addons.go:239] Setting addon amd-gpu-device-plugin=true in "addons-450053"
	I1123 08:20:51.208401  108626 host.go:66] Checking if "addons-450053" exists ...
	I1123 08:20:51.208580  108626 cli_runner.go:164] Run: docker container inspect addons-450053 --format={{.State.Status}}
	I1123 08:20:51.208709  108626 cli_runner.go:164] Run: docker container inspect addons-450053 --format={{.State.Status}}
	I1123 08:20:51.208742  108626 cli_runner.go:164] Run: docker container inspect addons-450053 --format={{.State.Status}}
	I1123 08:20:51.208752  108626 cli_runner.go:164] Run: docker container inspect addons-450053 --format={{.State.Status}}
	I1123 08:20:51.208777  108626 cli_runner.go:164] Run: docker container inspect addons-450053 --format={{.State.Status}}
	I1123 08:20:51.208818  108626 cli_runner.go:164] Run: docker container inspect addons-450053 --format={{.State.Status}}
	I1123 08:20:51.208928  108626 addons.go:70] Setting nvidia-device-plugin=true in profile "addons-450053"
	I1123 08:20:51.208956  108626 addons.go:239] Setting addon nvidia-device-plugin=true in "addons-450053"
	I1123 08:20:51.209023  108626 host.go:66] Checking if "addons-450053" exists ...
	I1123 08:20:51.209687  108626 addons.go:70] Setting storage-provisioner=true in profile "addons-450053"
	I1123 08:20:51.209713  108626 addons.go:239] Setting addon storage-provisioner=true in "addons-450053"
	I1123 08:20:51.209737  108626 host.go:66] Checking if "addons-450053" exists ...
	I1123 08:20:51.209909  108626 addons.go:70] Setting metrics-server=true in profile "addons-450053"
	I1123 08:20:51.209926  108626 addons.go:70] Setting storage-provisioner-rancher=true in profile "addons-450053"
	I1123 08:20:51.209934  108626 addons.go:239] Setting addon metrics-server=true in "addons-450053"
	I1123 08:20:51.209941  108626 addons_storage_classes.go:34] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-450053"
	I1123 08:20:51.210173  108626 addons.go:70] Setting csi-hostpath-driver=true in profile "addons-450053"
	I1123 08:20:51.210204  108626 host.go:66] Checking if "addons-450053" exists ...
	I1123 08:20:51.210222  108626 cli_runner.go:164] Run: docker container inspect addons-450053 --format={{.State.Status}}
	I1123 08:20:51.208281  108626 addons.go:239] Setting addon ingress=true in "addons-450053"
	I1123 08:20:51.210298  108626 host.go:66] Checking if "addons-450053" exists ...
	I1123 08:20:51.210696  108626 cli_runner.go:164] Run: docker container inspect addons-450053 --format={{.State.Status}}
	I1123 08:20:51.210732  108626 cli_runner.go:164] Run: docker container inspect addons-450053 --format={{.State.Status}}
	I1123 08:20:51.210916  108626 out.go:179] * Verifying Kubernetes components...
	I1123 08:20:51.211016  108626 cli_runner.go:164] Run: docker container inspect addons-450053 --format={{.State.Status}}
	I1123 08:20:51.211093  108626 addons.go:70] Setting volcano=true in profile "addons-450053"
	I1123 08:20:51.211105  108626 addons.go:239] Setting addon volcano=true in "addons-450053"
	I1123 08:20:51.211129  108626 host.go:66] Checking if "addons-450053" exists ...
	I1123 08:20:51.211534  108626 cli_runner.go:164] Run: docker container inspect addons-450053 --format={{.State.Status}}
	I1123 08:20:51.208263  108626 mustload.go:66] Loading cluster: addons-450053
	I1123 08:20:51.209914  108626 addons.go:70] Setting volumesnapshots=true in profile "addons-450053"
	I1123 08:20:51.212175  108626 addons.go:239] Setting addon volumesnapshots=true in "addons-450053"
	I1123 08:20:51.212248  108626 host.go:66] Checking if "addons-450053" exists ...
	I1123 08:20:51.210223  108626 addons.go:239] Setting addon csi-hostpath-driver=true in "addons-450053"
	I1123 08:20:51.212714  108626 host.go:66] Checking if "addons-450053" exists ...
	I1123 08:20:51.213155  108626 addons.go:70] Setting registry=true in profile "addons-450053"
	I1123 08:20:51.213173  108626 addons.go:239] Setting addon registry=true in "addons-450053"
	I1123 08:20:51.213200  108626 host.go:66] Checking if "addons-450053" exists ...
	I1123 08:20:51.213213  108626 cli_runner.go:164] Run: docker container inspect addons-450053 --format={{.State.Status}}
	I1123 08:20:51.208234  108626 host.go:66] Checking if "addons-450053" exists ...
	I1123 08:20:51.213959  108626 cli_runner.go:164] Run: docker container inspect addons-450053 --format={{.State.Status}}
	I1123 08:20:51.214701  108626 cli_runner.go:164] Run: docker container inspect addons-450053 --format={{.State.Status}}
	I1123 08:20:51.214824  108626 config.go:182] Loaded profile config "addons-450053": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 08:20:51.215098  108626 cli_runner.go:164] Run: docker container inspect addons-450053 --format={{.State.Status}}
	I1123 08:20:51.215945  108626 cli_runner.go:164] Run: docker container inspect addons-450053 --format={{.State.Status}}
	I1123 08:20:51.217098  108626 cli_runner.go:164] Run: docker container inspect addons-450053 --format={{.State.Status}}
	I1123 08:20:51.217828  108626 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 08:20:51.269524  108626 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.46.0
	I1123 08:20:51.272124  108626 addons.go:436] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1123 08:20:51.272145  108626 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1123 08:20:51.272214  108626 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-450053
	I1123 08:20:51.272373  108626 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1123 08:20:51.273574  108626 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1123 08:20:51.273592  108626 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1123 08:20:51.273686  108626 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-450053
	I1123 08:20:51.276474  108626 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1123 08:20:51.278768  108626 addons.go:436] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1123 08:20:51.278787  108626 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1123 08:20:51.278841  108626 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-450053
	I1123 08:20:51.279287  108626 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1123 08:20:51.281577  108626 addons.go:436] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1123 08:20:51.281610  108626 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1123 08:20:51.281724  108626 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-450053
	I1123 08:20:51.293041  108626 addons.go:239] Setting addon default-storageclass=true in "addons-450053"
	I1123 08:20:51.302397  108626 host.go:66] Checking if "addons-450053" exists ...
	I1123 08:20:51.294081  108626 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1123 08:20:51.301334  108626 addons.go:239] Setting addon storage-provisioner-rancher=true in "addons-450053"
	I1123 08:20:51.304995  108626 host.go:66] Checking if "addons-450053" exists ...
	I1123 08:20:51.305236  108626 cli_runner.go:164] Run: docker container inspect addons-450053 --format={{.State.Status}}
	I1123 08:20:51.305478  108626 cli_runner.go:164] Run: docker container inspect addons-450053 --format={{.State.Status}}
	I1123 08:20:51.305655  108626 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1123 08:20:51.305709  108626 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.18.0
	I1123 08:20:51.306151  108626 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1123 08:20:51.306899  108626 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1123 08:20:51.306958  108626 addons.go:436] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1123 08:20:51.307245  108626 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1123 08:20:51.307318  108626 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-450053
	I1123 08:20:51.306997  108626 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1123 08:20:51.307565  108626 host.go:66] Checking if "addons-450053" exists ...
	I1123 08:20:51.307597  108626 addons.go:436] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1123 08:20:51.307608  108626 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1123 08:20:51.307664  108626 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-450053
	I1123 08:20:51.307008  108626 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1123 08:20:51.307736  108626 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1123 08:20:51.307887  108626 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-450053
	I1123 08:20:51.308519  108626 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1123 08:20:51.308539  108626 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1123 08:20:51.308548  108626 addons.go:436] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1123 08:20:51.308561  108626 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1123 08:20:51.308584  108626 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-450053
	I1123 08:20:51.308620  108626 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-450053
	I1123 08:20:51.309439  108626 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.14.0
	I1123 08:20:51.311413  108626 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1123 08:20:51.314223  108626 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1123 08:20:51.314953  108626 addons.go:436] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1123 08:20:51.314986  108626 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1123 08:20:51.315050  108626 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-450053
	I1123 08:20:51.315219  108626 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.45
	W1123 08:20:51.315541  108626 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1123 08:20:51.317636  108626 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1123 08:20:51.318867  108626 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1123 08:20:51.319904  108626 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1123 08:20:51.322130  108626 addons.go:436] installing /etc/kubernetes/addons/deployment.yaml
	I1123 08:20:51.322148  108626 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1123 08:20:51.322232  108626 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-450053
	I1123 08:20:51.327230  108626 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1123 08:20:51.327349  108626 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1123 08:20:51.328419  108626 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1123 08:20:51.328537  108626 out.go:179]   - Using image docker.io/registry:3.0.0
	I1123 08:20:51.329775  108626 addons.go:436] installing /etc/kubernetes/addons/registry-rc.yaml
	I1123 08:20:51.329796  108626 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1123 08:20:51.329863  108626 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-450053
	I1123 08:20:51.331465  108626 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1123 08:20:51.332826  108626 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1123 08:20:51.336858  108626 addons.go:436] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1123 08:20:51.336882  108626 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1123 08:20:51.337037  108626 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-450053
	I1123 08:20:51.338227  108626 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21969-103686/.minikube/machines/addons-450053/id_rsa Username:docker}
	I1123 08:20:51.367865  108626 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21969-103686/.minikube/machines/addons-450053/id_rsa Username:docker}
	I1123 08:20:51.371497  108626 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21969-103686/.minikube/machines/addons-450053/id_rsa Username:docker}
	I1123 08:20:51.374206  108626 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1123 08:20:51.374824  108626 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21969-103686/.minikube/machines/addons-450053/id_rsa Username:docker}
	I1123 08:20:51.377744  108626 out.go:179]   - Using image docker.io/busybox:stable
	I1123 08:20:51.378085  108626 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21969-103686/.minikube/machines/addons-450053/id_rsa Username:docker}
	I1123 08:20:51.379186  108626 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1123 08:20:51.379210  108626 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1123 08:20:51.379273  108626 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-450053
	I1123 08:20:51.380898  108626 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21969-103686/.minikube/machines/addons-450053/id_rsa Username:docker}
	I1123 08:20:51.386582  108626 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21969-103686/.minikube/machines/addons-450053/id_rsa Username:docker}
	I1123 08:20:51.392884  108626 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21969-103686/.minikube/machines/addons-450053/id_rsa Username:docker}
	I1123 08:20:51.408295  108626 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1123 08:20:51.408374  108626 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1123 08:20:51.408584  108626 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-450053
	I1123 08:20:51.409982  108626 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21969-103686/.minikube/machines/addons-450053/id_rsa Username:docker}
	I1123 08:20:51.410822  108626 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21969-103686/.minikube/machines/addons-450053/id_rsa Username:docker}
	I1123 08:20:51.413545  108626 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21969-103686/.minikube/machines/addons-450053/id_rsa Username:docker}
	I1123 08:20:51.418029  108626 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21969-103686/.minikube/machines/addons-450053/id_rsa Username:docker}
	I1123 08:20:51.424005  108626 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21969-103686/.minikube/machines/addons-450053/id_rsa Username:docker}
	I1123 08:20:51.425022  108626 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	W1123 08:20:51.425395  108626 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1123 08:20:51.425442  108626 retry.go:31] will retry after 195.827486ms: ssh: handshake failed: EOF
	W1123 08:20:51.425675  108626 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1123 08:20:51.425698  108626 retry.go:31] will retry after 308.481241ms: ssh: handshake failed: EOF
	I1123 08:20:51.434190  108626 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21969-103686/.minikube/machines/addons-450053/id_rsa Username:docker}
	I1123 08:20:51.440322  108626 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21969-103686/.minikube/machines/addons-450053/id_rsa Username:docker}
	I1123 08:20:51.442403  108626 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 08:20:51.530897  108626 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1123 08:20:51.530923  108626 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml
	I1123 08:20:51.538949  108626 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1123 08:20:51.562839  108626 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1123 08:20:51.562868  108626 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1123 08:20:51.571518  108626 addons.go:436] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1123 08:20:51.571543  108626 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1123 08:20:51.584018  108626 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1123 08:20:51.585705  108626 addons.go:436] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1123 08:20:51.585725  108626 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1123 08:20:51.586092  108626 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1123 08:20:51.586804  108626 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1123 08:20:51.588129  108626 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1123 08:20:51.588149  108626 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1123 08:20:51.590392  108626 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1123 08:20:51.590411  108626 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1123 08:20:51.611791  108626 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1123 08:20:51.612556  108626 addons.go:436] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1123 08:20:51.612581  108626 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1123 08:20:51.615953  108626 addons.go:436] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1123 08:20:51.615985  108626 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1123 08:20:51.623163  108626 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1123 08:20:51.627380  108626 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1123 08:20:51.629336  108626 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1123 08:20:51.629359  108626 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1123 08:20:51.639198  108626 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1123 08:20:51.639230  108626 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1123 08:20:51.647719  108626 addons.go:436] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1123 08:20:51.647748  108626 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1123 08:20:51.674745  108626 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1123 08:20:51.677436  108626 addons.go:436] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1123 08:20:51.677479  108626 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1123 08:20:51.699119  108626 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1123 08:20:51.699148  108626 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1123 08:20:51.702349  108626 addons.go:436] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1123 08:20:51.702377  108626 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1123 08:20:51.735063  108626 addons.go:436] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1123 08:20:51.735100  108626 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1123 08:20:51.737120  108626 addons.go:436] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1123 08:20:51.737145  108626 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1123 08:20:51.755585  108626 addons.go:436] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1123 08:20:51.755616  108626 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1123 08:20:51.776157  108626 start.go:977] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
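
The "host record injected" line confirms the sed pipeline issued at 08:20:51.425022 took effect: minikube rewrites the coredns ConfigMap so that pods can resolve host.minikube.internal to the host gateway. After the replace, the Corefile contains roughly the following (unrelated stock plugins elided):

    .:53 {
        log            # inserted before the existing "errors" line
        errors
        ...
        hosts {
           192.168.49.1 host.minikube.internal
           fallthrough
        }
        forward . /etc/resolv.conf
        ...
    }
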
	I1123 08:20:51.776841  108626 node_ready.go:35] waiting up to 6m0s for node "addons-450053" to be "Ready" ...
	I1123 08:20:51.788103  108626 addons.go:436] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1123 08:20:51.788133  108626 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1123 08:20:51.817895  108626 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1123 08:20:51.817930  108626 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1123 08:20:51.821383  108626 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1123 08:20:51.853502  108626 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1123 08:20:51.879451  108626 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1123 08:20:51.879485  108626 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1123 08:20:51.880338  108626 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1123 08:20:51.922437  108626 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1123 08:20:51.922544  108626 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1123 08:20:51.937148  108626 addons.go:436] installing /etc/kubernetes/addons/registry-svc.yaml
	I1123 08:20:51.937178  108626 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1123 08:20:51.974532  108626 addons.go:436] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1123 08:20:51.974653  108626 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1123 08:20:51.984304  108626 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1123 08:20:51.984382  108626 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1123 08:20:52.029953  108626 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1123 08:20:52.029996  108626 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1123 08:20:52.031584  108626 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1123 08:20:52.063910  108626 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1123 08:20:52.285689  108626 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-450053" context rescaled to 1 replicas
	I1123 08:20:52.559317  108626 addons.go:495] Verifying addon metrics-server=true in "addons-450053"
	W1123 08:20:52.574438  108626 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
	I1123 08:20:52.599304  108626 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-450053 service yakd-dashboard -n yakd-dashboard
	
	I1123 08:20:53.178505  108626 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.324943989s)
	W1123 08:20:53.178563  108626 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1123 08:20:53.178589  108626 retry.go:31] will retry after 267.656376ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
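
The failure above is the usual CRD registration race: a single kubectl apply both creates the VolumeSnapshot* CRDs and instantiates a VolumeSnapshotClass, and the new kinds are not yet served when the custom resource is submitted, hence "ensure CRDs are installed first". minikube's answer is simply to wait and retry, as the retry.go line below shows. A stripped-down sketch of that pattern (retryApply and applyManifests are illustrative names, not minikube's):

    package addons

    import (
        "fmt"
        "time"
    )

    // retryApply retries an apply-style step with a growing delay, the same
    // shape as minikube's retry loop. applyManifests stands in for the
    // kubectl apply invocation from the log.
    func retryApply(applyManifests func() error, attempts int) error {
        delay := 250 * time.Millisecond
        var err error
        for i := 0; i < attempts; i++ {
            if err = applyManifests(); err == nil {
                return nil
            }
            fmt.Printf("will retry after %v: %v\n", delay, err)
            time.Sleep(delay)
            delay *= 2 // back off between attempts
        }
        return fmt.Errorf("apply failed after %d attempts: %w", attempts, err)
    }
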
	I1123 08:20:53.178652  108626 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (1.298282875s)
	I1123 08:20:53.178693  108626 addons.go:495] Verifying addon ingress=true in "addons-450053"
	I1123 08:20:53.178721  108626 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (1.147108969s)
	I1123 08:20:53.178740  108626 addons.go:495] Verifying addon registry=true in "addons-450053"
	I1123 08:20:53.178912  108626 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (1.114961309s)
	I1123 08:20:53.178933  108626 addons.go:495] Verifying addon csi-hostpath-driver=true in "addons-450053"
	I1123 08:20:53.180236  108626 out.go:179] * Verifying ingress addon...
	I1123 08:20:53.180272  108626 out.go:179] * Verifying registry addon...
	I1123 08:20:53.181221  108626 out.go:179] * Verifying csi-hostpath-driver addon...
	I1123 08:20:53.182826  108626 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1123 08:20:53.182826  108626 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1123 08:20:53.183919  108626 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1123 08:20:53.185497  108626 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1123 08:20:53.185518  108626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:20:53.186512  108626 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1123 08:20:53.186527  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:20:53.186576  108626 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1123 08:20:53.186591  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
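
The kapi.go lines above poll each addon's pods by label selector until they leave Pending; the long run of "current state: Pending" entries that follows is that loop ticking roughly every 500ms per addon. A minimal client-go sketch of the same wait (waitForSelector is an illustrative name; a built clientset is assumed):

    package addons

    import (
        "context"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // waitForSelector polls pods matching selector in ns until at least one
    // exists and none is still Pending.
    func waitForSelector(ctx context.Context, c kubernetes.Interface, ns, selector string) error {
        for {
            pods, err := c.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
            if err != nil {
                return err
            }
            pending := len(pods.Items) == 0 // no pods yet counts as not ready
            for _, p := range pods.Items {
                if p.Status.Phase == corev1.PodPending {
                    pending = true
                }
            }
            if !pending {
                return nil
            }
            select {
            case <-ctx.Done():
                return ctx.Err()
            case <-time.After(500 * time.Millisecond):
            }
        }
    }
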
	I1123 08:20:53.446879  108626 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1123 08:20:53.688738  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:20:53.688865  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:20:53.689088  108626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1123 08:20:53.779491  108626 node_ready.go:57] node "addons-450053" has "Ready":"False" status (will retry)
	I1123 08:20:54.186829  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:20:54.186829  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:20:54.187039  108626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:20:54.686354  108626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:20:54.686365  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:20:54.686370  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:20:55.186335  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:20:55.186502  108626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:20:55.186532  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:20:55.686439  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:20:55.686439  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:20:55.686615  108626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1123 08:20:55.779639  108626 node_ready.go:57] node "addons-450053" has "Ready":"False" status (will retry)
	I1123 08:20:55.929776  108626 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.482848682s)
	I1123 08:20:56.187067  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:20:56.187068  108626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:20:56.187068  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:20:56.686083  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:20:56.686083  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:20:56.686290  108626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:20:57.186674  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:20:57.186731  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:20:57.186810  108626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:20:57.686240  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:20:57.686280  108626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:20:57.686331  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1123 08:20:57.780038  108626 node_ready.go:57] node "addons-450053" has "Ready":"False" status (will retry)
	I1123 08:20:58.186107  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:20:58.186173  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:20:58.186183  108626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:20:58.686994  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:20:58.687028  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:20:58.687138  108626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:20:58.918220  108626 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1123 08:20:58.918290  108626 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-450053
	I1123 08:20:58.936101  108626 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21969-103686/.minikube/machines/addons-450053/id_rsa Username:docker}
	I1123 08:20:59.051339  108626 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1123 08:20:59.063594  108626 addons.go:239] Setting addon gcp-auth=true in "addons-450053"
	I1123 08:20:59.063656  108626 host.go:66] Checking if "addons-450053" exists ...
	I1123 08:20:59.064090  108626 cli_runner.go:164] Run: docker container inspect addons-450053 --format={{.State.Status}}
	I1123 08:20:59.081736  108626 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1123 08:20:59.081787  108626 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-450053
	I1123 08:20:59.099192  108626 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21969-103686/.minikube/machines/addons-450053/id_rsa Username:docker}
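
The cli_runner/sshutil pair above is how minikube reaches a Docker-driver node: it asks Docker which host port was published for the container's 22/tcp and dials SSH at 127.0.0.1 on that port (32768 here). The same lookup, shelling out to the docker CLI with the template from the log (hostSSHPort is an illustrative helper):

    package addons

    import (
        "os/exec"
        "strings"
    )

    // hostSSHPort returns the host port Docker published for the container's
    // SSH port, using the exact inspect template seen in the log.
    func hostSSHPort(container string) (string, error) {
        out, err := exec.Command("docker", "container", "inspect", "-f",
            `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`, container).Output()
        if err != nil {
            return "", err
        }
        return strings.TrimSpace(string(out)), nil
    }
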
	I1123 08:20:59.186667  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:20:59.186746  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:20:59.186877  108626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:20:59.197427  108626 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1123 08:20:59.198770  108626 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1123 08:20:59.199825  108626 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1123 08:20:59.199844  108626 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1123 08:20:59.212485  108626 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1123 08:20:59.212507  108626 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1123 08:20:59.224833  108626 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1123 08:20:59.224856  108626 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1123 08:20:59.237447  108626 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1123 08:20:59.526081  108626 addons.go:495] Verifying addon gcp-auth=true in "addons-450053"
	I1123 08:20:59.527285  108626 out.go:179] * Verifying gcp-auth addon...
	I1123 08:20:59.529056  108626 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1123 08:20:59.531278  108626 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1123 08:20:59.531299  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:20:59.685937  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:20:59.686027  108626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:20:59.686343  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:21:00.031655  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:21:00.186548  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:21:00.186639  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:21:00.186680  108626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1123 08:21:00.280042  108626 node_ready.go:57] node "addons-450053" has "Ready":"False" status (will retry)
	I1123 08:21:00.532792  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:21:00.686951  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:21:00.686961  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:21:00.687000  108626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:21:01.032171  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:21:01.186044  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:21:01.186089  108626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:21:01.186250  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:21:01.532177  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:21:01.685753  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:21:01.685986  108626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:21:01.686277  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:21:02.032572  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:21:02.186138  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:21:02.186189  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:21:02.186468  108626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:21:02.532429  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:21:02.686431  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:21:02.686442  108626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:21:02.686640  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W1123 08:21:02.780734  108626 node_ready.go:57] node "addons-450053" has "Ready":"False" status (will retry)
	I1123 08:21:03.031859  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:21:03.191048  108626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:21:03.192008  108626 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1123 08:21:03.192085  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:21:03.192054  108626 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1123 08:21:03.192148  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:21:03.279923  108626 node_ready.go:49] node "addons-450053" is "Ready"
	I1123 08:21:03.279953  108626 node_ready.go:38] duration metric: took 11.503075337s for node "addons-450053" to be "Ready" ...
	I1123 08:21:03.279981  108626 api_server.go:52] waiting for apiserver process to appear ...
	I1123 08:21:03.280037  108626 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1123 08:21:03.295295  108626 api_server.go:72] duration metric: took 12.087400322s to wait for apiserver process to appear ...
	I1123 08:21:03.295331  108626 api_server.go:88] waiting for apiserver healthz status ...
	I1123 08:21:03.295359  108626 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1123 08:21:03.299779  108626 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1123 08:21:03.300646  108626 api_server.go:141] control plane version: v1.34.1
	I1123 08:21:03.300670  108626 api_server.go:131] duration metric: took 5.331287ms to wait for apiserver health ...
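
Once the kube-apiserver process shows up under pgrep, minikube polls the /healthz endpoint until it answers 200 "ok", as the lines above record. A compact sketch of that probe; skipping TLS verification is a simplification for the sketch, the real client trusts the cluster CA:

    package addons

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    // waitHealthz polls url (e.g. https://192.168.49.2:8443/healthz) until it
    // returns 200 "ok" or the deadline passes.
    func waitHealthz(url string, timeout time.Duration) error {
        client := &http.Client{
            Timeout: 2 * time.Second,
            // Sketch-only shortcut; verify against the cluster CA in real code.
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            if resp, err := client.Get(url); err == nil {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK && string(body) == "ok" {
                    return nil
                }
            }
            time.Sleep(time.Second)
        }
        return fmt.Errorf("apiserver not healthy at %s after %v", url, timeout)
    }
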
	I1123 08:21:03.300679  108626 system_pods.go:43] waiting for kube-system pods to appear ...
	I1123 08:21:03.303614  108626 system_pods.go:59] 20 kube-system pods found
	I1123 08:21:03.303642  108626 system_pods.go:61] "amd-gpu-device-plugin-625vc" [c5f91220-0c10-421a-80d5-efb93906fabe] Pending
	I1123 08:21:03.303653  108626 system_pods.go:61] "coredns-66bc5c9577-n2ksh" [1fe3dca6-6b07-4de2-83e3-29ea85694c99] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 08:21:03.303659  108626 system_pods.go:61] "csi-hostpath-attacher-0" [02d82dd0-2aba-4204-b5b2-fc371db85e0e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1123 08:21:03.303667  108626 system_pods.go:61] "csi-hostpath-resizer-0" [bf10fe87-1932-4c75-a8ea-9b08d219357b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1123 08:21:03.303674  108626 system_pods.go:61] "csi-hostpathplugin-kgwc9" [d8646ded-167a-444d-bccf-1ad472465376] Pending
	I1123 08:21:03.303678  108626 system_pods.go:61] "etcd-addons-450053" [c141c9f6-d76e-4275-b24b-c96e3b1ba0df] Running
	I1123 08:21:03.303688  108626 system_pods.go:61] "kindnet-w25rx" [df5a9205-65f6-473b-9aaf-e2b5f0594c9c] Running
	I1123 08:21:03.303695  108626 system_pods.go:61] "kube-apiserver-addons-450053" [beed6881-dac2-4e3a-a0e5-30d253cdff32] Running
	I1123 08:21:03.303698  108626 system_pods.go:61] "kube-controller-manager-addons-450053" [f2cc52ce-540e-4930-a13b-ec022573988d] Running
	I1123 08:21:03.303704  108626 system_pods.go:61] "kube-ingress-dns-minikube" [c96ec810-070a-45be-b95d-a0efab2d29b1] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1123 08:21:03.303708  108626 system_pods.go:61] "kube-proxy-mvm7j" [82b7e31a-fe86-48f3-aaf9-804bae8294a8] Running
	I1123 08:21:03.303712  108626 system_pods.go:61] "kube-scheduler-addons-450053" [de6dfc85-172b-4196-81af-693846b1d79b] Running
	I1123 08:21:03.303716  108626 system_pods.go:61] "metrics-server-85b7d694d7-74pfv" [45d13fcb-95ca-476d-b5f6-96b8120fe8e4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1123 08:21:03.303722  108626 system_pods.go:61] "nvidia-device-plugin-daemonset-hpnrm" [f84547e5-5d46-4cfc-874a-413b67ecdb49] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1123 08:21:03.303729  108626 system_pods.go:61] "registry-6b586f9694-48d75" [cc2a224a-be19-4f84-8699-fcb2e9fc4c59] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1123 08:21:03.303733  108626 system_pods.go:61] "registry-creds-764b6fb674-gvfgs" [b765d6ad-2418-44c7-9da3-fb58dc143860] Pending
	I1123 08:21:03.303741  108626 system_pods.go:61] "registry-proxy-l5z45" [0dc46992-7951-4eae-8ad8-1e175ba138cb] Pending
	I1123 08:21:03.303747  108626 system_pods.go:61] "snapshot-controller-7d9fbc56b8-c52wh" [812cadfd-e0af-4e91-a85d-f0bf11412d6c] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1123 08:21:03.303753  108626 system_pods.go:61] "snapshot-controller-7d9fbc56b8-jgxr4" [fc46ed37-adfa-4ddd-b87f-d44f1f55d872] Pending
	I1123 08:21:03.303758  108626 system_pods.go:61] "storage-provisioner" [5640be3b-31a4-4ece-9add-676a90ef0dfd] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 08:21:03.303765  108626 system_pods.go:74] duration metric: took 3.079683ms to wait for pod list to return data ...
	I1123 08:21:03.303774  108626 default_sa.go:34] waiting for default service account to be created ...
	I1123 08:21:03.305931  108626 default_sa.go:45] found service account: "default"
	I1123 08:21:03.305955  108626 default_sa.go:55] duration metric: took 2.174036ms for default service account to be created ...
	I1123 08:21:03.305985  108626 system_pods.go:116] waiting for k8s-apps to be running ...
	I1123 08:21:03.309230  108626 system_pods.go:86] 20 kube-system pods found
	I1123 08:21:03.309258  108626 system_pods.go:89] "amd-gpu-device-plugin-625vc" [c5f91220-0c10-421a-80d5-efb93906fabe] Pending
	I1123 08:21:03.309271  108626 system_pods.go:89] "coredns-66bc5c9577-n2ksh" [1fe3dca6-6b07-4de2-83e3-29ea85694c99] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 08:21:03.309280  108626 system_pods.go:89] "csi-hostpath-attacher-0" [02d82dd0-2aba-4204-b5b2-fc371db85e0e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1123 08:21:03.309295  108626 system_pods.go:89] "csi-hostpath-resizer-0" [bf10fe87-1932-4c75-a8ea-9b08d219357b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1123 08:21:03.309301  108626 system_pods.go:89] "csi-hostpathplugin-kgwc9" [d8646ded-167a-444d-bccf-1ad472465376] Pending
	I1123 08:21:03.309309  108626 system_pods.go:89] "etcd-addons-450053" [c141c9f6-d76e-4275-b24b-c96e3b1ba0df] Running
	I1123 08:21:03.309315  108626 system_pods.go:89] "kindnet-w25rx" [df5a9205-65f6-473b-9aaf-e2b5f0594c9c] Running
	I1123 08:21:03.309323  108626 system_pods.go:89] "kube-apiserver-addons-450053" [beed6881-dac2-4e3a-a0e5-30d253cdff32] Running
	I1123 08:21:03.309327  108626 system_pods.go:89] "kube-controller-manager-addons-450053" [f2cc52ce-540e-4930-a13b-ec022573988d] Running
	I1123 08:21:03.309338  108626 system_pods.go:89] "kube-ingress-dns-minikube" [c96ec810-070a-45be-b95d-a0efab2d29b1] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1123 08:21:03.309347  108626 system_pods.go:89] "kube-proxy-mvm7j" [82b7e31a-fe86-48f3-aaf9-804bae8294a8] Running
	I1123 08:21:03.309356  108626 system_pods.go:89] "kube-scheduler-addons-450053" [de6dfc85-172b-4196-81af-693846b1d79b] Running
	I1123 08:21:03.309367  108626 system_pods.go:89] "metrics-server-85b7d694d7-74pfv" [45d13fcb-95ca-476d-b5f6-96b8120fe8e4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1123 08:21:03.309378  108626 system_pods.go:89] "nvidia-device-plugin-daemonset-hpnrm" [f84547e5-5d46-4cfc-874a-413b67ecdb49] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1123 08:21:03.309389  108626 system_pods.go:89] "registry-6b586f9694-48d75" [cc2a224a-be19-4f84-8699-fcb2e9fc4c59] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1123 08:21:03.309400  108626 system_pods.go:89] "registry-creds-764b6fb674-gvfgs" [b765d6ad-2418-44c7-9da3-fb58dc143860] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1123 08:21:03.309410  108626 system_pods.go:89] "registry-proxy-l5z45" [0dc46992-7951-4eae-8ad8-1e175ba138cb] Pending
	I1123 08:21:03.309418  108626 system_pods.go:89] "snapshot-controller-7d9fbc56b8-c52wh" [812cadfd-e0af-4e91-a85d-f0bf11412d6c] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1123 08:21:03.309423  108626 system_pods.go:89] "snapshot-controller-7d9fbc56b8-jgxr4" [fc46ed37-adfa-4ddd-b87f-d44f1f55d872] Pending
	I1123 08:21:03.309435  108626 system_pods.go:89] "storage-provisioner" [5640be3b-31a4-4ece-9add-676a90ef0dfd] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 08:21:03.309456  108626 retry.go:31] will retry after 298.833993ms: missing components: kube-dns
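
system_pods.go keeps a short list of must-run components and retries until each has a Running pod; everything here is blocking on kube-dns, i.e. the coredns pod (label k8s-app=kube-dns) that is still ContainersNotReady above. A hedged check for one such component (componentRunning is an illustrative name):

    package addons

    import (
        "context"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // componentRunning reports whether any kube-system pod carrying the
    // given label (e.g. "k8s-app=kube-dns" for coredns) is Running.
    func componentRunning(ctx context.Context, c kubernetes.Interface, label string) (bool, error) {
        pods, err := c.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{LabelSelector: label})
        if err != nil {
            return false, err
        }
        for _, p := range pods.Items {
            if p.Status.Phase == corev1.PodRunning {
                return true, nil
            }
        }
        return false, nil
    }
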
	I1123 08:21:03.537761  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:21:03.639866  108626 system_pods.go:86] 20 kube-system pods found
	I1123 08:21:03.639912  108626 system_pods.go:89] "amd-gpu-device-plugin-625vc" [c5f91220-0c10-421a-80d5-efb93906fabe] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1123 08:21:03.639923  108626 system_pods.go:89] "coredns-66bc5c9577-n2ksh" [1fe3dca6-6b07-4de2-83e3-29ea85694c99] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 08:21:03.639934  108626 system_pods.go:89] "csi-hostpath-attacher-0" [02d82dd0-2aba-4204-b5b2-fc371db85e0e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1123 08:21:03.639943  108626 system_pods.go:89] "csi-hostpath-resizer-0" [bf10fe87-1932-4c75-a8ea-9b08d219357b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1123 08:21:03.640007  108626 system_pods.go:89] "csi-hostpathplugin-kgwc9" [d8646ded-167a-444d-bccf-1ad472465376] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1123 08:21:03.640017  108626 system_pods.go:89] "etcd-addons-450053" [c141c9f6-d76e-4275-b24b-c96e3b1ba0df] Running
	I1123 08:21:03.640023  108626 system_pods.go:89] "kindnet-w25rx" [df5a9205-65f6-473b-9aaf-e2b5f0594c9c] Running
	I1123 08:21:03.640029  108626 system_pods.go:89] "kube-apiserver-addons-450053" [beed6881-dac2-4e3a-a0e5-30d253cdff32] Running
	I1123 08:21:03.640035  108626 system_pods.go:89] "kube-controller-manager-addons-450053" [f2cc52ce-540e-4930-a13b-ec022573988d] Running
	I1123 08:21:03.640043  108626 system_pods.go:89] "kube-ingress-dns-minikube" [c96ec810-070a-45be-b95d-a0efab2d29b1] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1123 08:21:03.640056  108626 system_pods.go:89] "kube-proxy-mvm7j" [82b7e31a-fe86-48f3-aaf9-804bae8294a8] Running
	I1123 08:21:03.640063  108626 system_pods.go:89] "kube-scheduler-addons-450053" [de6dfc85-172b-4196-81af-693846b1d79b] Running
	I1123 08:21:03.640073  108626 system_pods.go:89] "metrics-server-85b7d694d7-74pfv" [45d13fcb-95ca-476d-b5f6-96b8120fe8e4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1123 08:21:03.640083  108626 system_pods.go:89] "nvidia-device-plugin-daemonset-hpnrm" [f84547e5-5d46-4cfc-874a-413b67ecdb49] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1123 08:21:03.640096  108626 system_pods.go:89] "registry-6b586f9694-48d75" [cc2a224a-be19-4f84-8699-fcb2e9fc4c59] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1123 08:21:03.640106  108626 system_pods.go:89] "registry-creds-764b6fb674-gvfgs" [b765d6ad-2418-44c7-9da3-fb58dc143860] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1123 08:21:03.640114  108626 system_pods.go:89] "registry-proxy-l5z45" [0dc46992-7951-4eae-8ad8-1e175ba138cb] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1123 08:21:03.640125  108626 system_pods.go:89] "snapshot-controller-7d9fbc56b8-c52wh" [812cadfd-e0af-4e91-a85d-f0bf11412d6c] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1123 08:21:03.640136  108626 system_pods.go:89] "snapshot-controller-7d9fbc56b8-jgxr4" [fc46ed37-adfa-4ddd-b87f-d44f1f55d872] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1123 08:21:03.640148  108626 system_pods.go:89] "storage-provisioner" [5640be3b-31a4-4ece-9add-676a90ef0dfd] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 08:21:03.640177  108626 retry.go:31] will retry after 348.707573ms: missing components: kube-dns
	I1123 08:21:03.738886  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:21:03.739099  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:21:03.739151  108626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:21:03.994128  108626 system_pods.go:86] 20 kube-system pods found
	I1123 08:21:03.994162  108626 system_pods.go:89] "amd-gpu-device-plugin-625vc" [c5f91220-0c10-421a-80d5-efb93906fabe] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1123 08:21:03.994170  108626 system_pods.go:89] "coredns-66bc5c9577-n2ksh" [1fe3dca6-6b07-4de2-83e3-29ea85694c99] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 08:21:03.994179  108626 system_pods.go:89] "csi-hostpath-attacher-0" [02d82dd0-2aba-4204-b5b2-fc371db85e0e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1123 08:21:03.994186  108626 system_pods.go:89] "csi-hostpath-resizer-0" [bf10fe87-1932-4c75-a8ea-9b08d219357b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1123 08:21:03.994194  108626 system_pods.go:89] "csi-hostpathplugin-kgwc9" [d8646ded-167a-444d-bccf-1ad472465376] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1123 08:21:03.994202  108626 system_pods.go:89] "etcd-addons-450053" [c141c9f6-d76e-4275-b24b-c96e3b1ba0df] Running
	I1123 08:21:03.994209  108626 system_pods.go:89] "kindnet-w25rx" [df5a9205-65f6-473b-9aaf-e2b5f0594c9c] Running
	I1123 08:21:03.994218  108626 system_pods.go:89] "kube-apiserver-addons-450053" [beed6881-dac2-4e3a-a0e5-30d253cdff32] Running
	I1123 08:21:03.994223  108626 system_pods.go:89] "kube-controller-manager-addons-450053" [f2cc52ce-540e-4930-a13b-ec022573988d] Running
	I1123 08:21:03.994234  108626 system_pods.go:89] "kube-ingress-dns-minikube" [c96ec810-070a-45be-b95d-a0efab2d29b1] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1123 08:21:03.994240  108626 system_pods.go:89] "kube-proxy-mvm7j" [82b7e31a-fe86-48f3-aaf9-804bae8294a8] Running
	I1123 08:21:03.994249  108626 system_pods.go:89] "kube-scheduler-addons-450053" [de6dfc85-172b-4196-81af-693846b1d79b] Running
	I1123 08:21:03.994261  108626 system_pods.go:89] "metrics-server-85b7d694d7-74pfv" [45d13fcb-95ca-476d-b5f6-96b8120fe8e4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1123 08:21:03.994270  108626 system_pods.go:89] "nvidia-device-plugin-daemonset-hpnrm" [f84547e5-5d46-4cfc-874a-413b67ecdb49] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1123 08:21:03.994279  108626 system_pods.go:89] "registry-6b586f9694-48d75" [cc2a224a-be19-4f84-8699-fcb2e9fc4c59] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1123 08:21:03.994292  108626 system_pods.go:89] "registry-creds-764b6fb674-gvfgs" [b765d6ad-2418-44c7-9da3-fb58dc143860] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1123 08:21:03.994300  108626 system_pods.go:89] "registry-proxy-l5z45" [0dc46992-7951-4eae-8ad8-1e175ba138cb] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1123 08:21:03.994308  108626 system_pods.go:89] "snapshot-controller-7d9fbc56b8-c52wh" [812cadfd-e0af-4e91-a85d-f0bf11412d6c] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1123 08:21:03.994317  108626 system_pods.go:89] "snapshot-controller-7d9fbc56b8-jgxr4" [fc46ed37-adfa-4ddd-b87f-d44f1f55d872] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1123 08:21:03.994325  108626 system_pods.go:89] "storage-provisioner" [5640be3b-31a4-4ece-9add-676a90ef0dfd] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 08:21:03.994348  108626 retry.go:31] will retry after 358.645575ms: missing components: kube-dns
	I1123 08:21:04.032063  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:21:04.185957  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:21:04.186512  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:21:04.186813  108626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:21:04.358219  108626 system_pods.go:86] 20 kube-system pods found
	I1123 08:21:04.358256  108626 system_pods.go:89] "amd-gpu-device-plugin-625vc" [c5f91220-0c10-421a-80d5-efb93906fabe] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1123 08:21:04.358266  108626 system_pods.go:89] "coredns-66bc5c9577-n2ksh" [1fe3dca6-6b07-4de2-83e3-29ea85694c99] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 08:21:04.358280  108626 system_pods.go:89] "csi-hostpath-attacher-0" [02d82dd0-2aba-4204-b5b2-fc371db85e0e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1123 08:21:04.358288  108626 system_pods.go:89] "csi-hostpath-resizer-0" [bf10fe87-1932-4c75-a8ea-9b08d219357b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1123 08:21:04.358297  108626 system_pods.go:89] "csi-hostpathplugin-kgwc9" [d8646ded-167a-444d-bccf-1ad472465376] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1123 08:21:04.358304  108626 system_pods.go:89] "etcd-addons-450053" [c141c9f6-d76e-4275-b24b-c96e3b1ba0df] Running
	I1123 08:21:04.358310  108626 system_pods.go:89] "kindnet-w25rx" [df5a9205-65f6-473b-9aaf-e2b5f0594c9c] Running
	I1123 08:21:04.358316  108626 system_pods.go:89] "kube-apiserver-addons-450053" [beed6881-dac2-4e3a-a0e5-30d253cdff32] Running
	I1123 08:21:04.358349  108626 system_pods.go:89] "kube-controller-manager-addons-450053" [f2cc52ce-540e-4930-a13b-ec022573988d] Running
	I1123 08:21:04.358365  108626 system_pods.go:89] "kube-ingress-dns-minikube" [c96ec810-070a-45be-b95d-a0efab2d29b1] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1123 08:21:04.358370  108626 system_pods.go:89] "kube-proxy-mvm7j" [82b7e31a-fe86-48f3-aaf9-804bae8294a8] Running
	I1123 08:21:04.358377  108626 system_pods.go:89] "kube-scheduler-addons-450053" [de6dfc85-172b-4196-81af-693846b1d79b] Running
	I1123 08:21:04.358388  108626 system_pods.go:89] "metrics-server-85b7d694d7-74pfv" [45d13fcb-95ca-476d-b5f6-96b8120fe8e4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1123 08:21:04.358397  108626 system_pods.go:89] "nvidia-device-plugin-daemonset-hpnrm" [f84547e5-5d46-4cfc-874a-413b67ecdb49] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1123 08:21:04.358407  108626 system_pods.go:89] "registry-6b586f9694-48d75" [cc2a224a-be19-4f84-8699-fcb2e9fc4c59] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1123 08:21:04.358415  108626 system_pods.go:89] "registry-creds-764b6fb674-gvfgs" [b765d6ad-2418-44c7-9da3-fb58dc143860] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1123 08:21:04.358423  108626 system_pods.go:89] "registry-proxy-l5z45" [0dc46992-7951-4eae-8ad8-1e175ba138cb] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1123 08:21:04.358434  108626 system_pods.go:89] "snapshot-controller-7d9fbc56b8-c52wh" [812cadfd-e0af-4e91-a85d-f0bf11412d6c] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1123 08:21:04.358445  108626 system_pods.go:89] "snapshot-controller-7d9fbc56b8-jgxr4" [fc46ed37-adfa-4ddd-b87f-d44f1f55d872] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1123 08:21:04.358454  108626 system_pods.go:89] "storage-provisioner" [5640be3b-31a4-4ece-9add-676a90ef0dfd] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 08:21:04.358473  108626 retry.go:31] will retry after 497.770376ms: missing components: kube-dns
	I1123 08:21:04.532527  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:21:04.687112  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:21:04.687212  108626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:21:04.687224  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:21:04.861161  108626 system_pods.go:86] 20 kube-system pods found
	I1123 08:21:04.861198  108626 system_pods.go:89] "amd-gpu-device-plugin-625vc" [c5f91220-0c10-421a-80d5-efb93906fabe] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1123 08:21:04.861210  108626 system_pods.go:89] "coredns-66bc5c9577-n2ksh" [1fe3dca6-6b07-4de2-83e3-29ea85694c99] Running
	I1123 08:21:04.861220  108626 system_pods.go:89] "csi-hostpath-attacher-0" [02d82dd0-2aba-4204-b5b2-fc371db85e0e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1123 08:21:04.861225  108626 system_pods.go:89] "csi-hostpath-resizer-0" [bf10fe87-1932-4c75-a8ea-9b08d219357b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1123 08:21:04.861231  108626 system_pods.go:89] "csi-hostpathplugin-kgwc9" [d8646ded-167a-444d-bccf-1ad472465376] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1123 08:21:04.861234  108626 system_pods.go:89] "etcd-addons-450053" [c141c9f6-d76e-4275-b24b-c96e3b1ba0df] Running
	I1123 08:21:04.861238  108626 system_pods.go:89] "kindnet-w25rx" [df5a9205-65f6-473b-9aaf-e2b5f0594c9c] Running
	I1123 08:21:04.861242  108626 system_pods.go:89] "kube-apiserver-addons-450053" [beed6881-dac2-4e3a-a0e5-30d253cdff32] Running
	I1123 08:21:04.861246  108626 system_pods.go:89] "kube-controller-manager-addons-450053" [f2cc52ce-540e-4930-a13b-ec022573988d] Running
	I1123 08:21:04.861251  108626 system_pods.go:89] "kube-ingress-dns-minikube" [c96ec810-070a-45be-b95d-a0efab2d29b1] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1123 08:21:04.861254  108626 system_pods.go:89] "kube-proxy-mvm7j" [82b7e31a-fe86-48f3-aaf9-804bae8294a8] Running
	I1123 08:21:04.861258  108626 system_pods.go:89] "kube-scheduler-addons-450053" [de6dfc85-172b-4196-81af-693846b1d79b] Running
	I1123 08:21:04.861263  108626 system_pods.go:89] "metrics-server-85b7d694d7-74pfv" [45d13fcb-95ca-476d-b5f6-96b8120fe8e4] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1123 08:21:04.861268  108626 system_pods.go:89] "nvidia-device-plugin-daemonset-hpnrm" [f84547e5-5d46-4cfc-874a-413b67ecdb49] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1123 08:21:04.861276  108626 system_pods.go:89] "registry-6b586f9694-48d75" [cc2a224a-be19-4f84-8699-fcb2e9fc4c59] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1123 08:21:04.861281  108626 system_pods.go:89] "registry-creds-764b6fb674-gvfgs" [b765d6ad-2418-44c7-9da3-fb58dc143860] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1123 08:21:04.861286  108626 system_pods.go:89] "registry-proxy-l5z45" [0dc46992-7951-4eae-8ad8-1e175ba138cb] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1123 08:21:04.861290  108626 system_pods.go:89] "snapshot-controller-7d9fbc56b8-c52wh" [812cadfd-e0af-4e91-a85d-f0bf11412d6c] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1123 08:21:04.861299  108626 system_pods.go:89] "snapshot-controller-7d9fbc56b8-jgxr4" [fc46ed37-adfa-4ddd-b87f-d44f1f55d872] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1123 08:21:04.861304  108626 system_pods.go:89] "storage-provisioner" [5640be3b-31a4-4ece-9add-676a90ef0dfd] Running
	I1123 08:21:04.861312  108626 system_pods.go:126] duration metric: took 1.555320579s to wait for k8s-apps to be running ...
	I1123 08:21:04.861321  108626 system_svc.go:44] waiting for kubelet service to be running ....
	I1123 08:21:04.861367  108626 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 08:21:04.874368  108626 system_svc.go:56] duration metric: took 13.038091ms WaitForService to wait for kubelet
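
The kubelet probe above is a single systemd query: systemctl is-active --quiet exits 0 when at least one of the named units is active, so minikube only needs the exit status, no output parsing. Reduced to a local sketch:

    package addons

    import "os/exec"

    // kubeletActive mirrors the check in the log: is-active's exit status
    // alone says whether the unit is running.
    func kubeletActive() bool {
        return exec.Command("sudo", "systemctl", "is-active", "--quiet", "kubelet").Run() == nil
    }
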
	I1123 08:21:04.874396  108626 kubeadm.go:587] duration metric: took 13.666506996s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1123 08:21:04.874421  108626 node_conditions.go:102] verifying NodePressure condition ...
	I1123 08:21:04.876958  108626 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1123 08:21:04.877006  108626 node_conditions.go:123] node cpu capacity is 8
	I1123 08:21:04.877027  108626 node_conditions.go:105] duration metric: took 2.600094ms to run NodePressure ...
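
The NodePressure step reads capacity straight off the Node object (304681132Ki of ephemeral storage and 8 CPUs on this runner) and confirms no pressure condition is set. A client-go sketch of the same read (checkNodePressure is an illustrative name):

    package addons

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // checkNodePressure prints the node's capacity and fails if any
    // memory/disk/PID pressure condition is True.
    func checkNodePressure(ctx context.Context, c kubernetes.Interface, name string) error {
        node, err := c.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
        if err != nil {
            return err
        }
        fmt.Println("ephemeral-storage:", node.Status.Capacity.StorageEphemeral().String())
        fmt.Println("cpu:", node.Status.Capacity.Cpu().String())
        for _, cond := range node.Status.Conditions {
            switch cond.Type {
            case corev1.NodeMemoryPressure, corev1.NodeDiskPressure, corev1.NodePIDPressure:
                if cond.Status == corev1.ConditionTrue {
                    return fmt.Errorf("node %s reports %s", name, cond.Type)
                }
            }
        }
        return nil
    }
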
	I1123 08:21:04.877045  108626 start.go:242] waiting for startup goroutines ...
	I1123 08:21:05.032883  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:21:05.185716  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:21:05.185764  108626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:21:05.186074  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:21:05.531683  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:21:05.687215  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:21:05.687366  108626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:21:05.687478  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:21:06.032995  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:21:06.187109  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:21:06.189265  108626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:21:06.189319  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:21:06.532145  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:21:06.688318  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:21:06.688321  108626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:21:06.688433  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:21:07.032977  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:21:07.186080  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:21:07.186127  108626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:21:07.186374  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:21:07.532681  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:21:07.687227  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:21:07.687481  108626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:21:07.687601  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:21:08.032609  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:21:08.186798  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:21:08.186855  108626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:21:08.186925  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:21:08.532006  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:21:08.687050  108626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:21:08.687193  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:21:08.687241  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:21:09.032750  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:21:09.186799  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:21:09.187265  108626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:21:09.187286  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:21:09.531612  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:21:09.686770  108626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:21:09.686918  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:21:09.687047  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:21:10.033257  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:21:10.186368  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:21:10.186475  108626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:21:10.186749  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:21:10.532781  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:21:10.686764  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:21:10.686816  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:21:10.686899  108626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:21:11.033168  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:21:11.186077  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:21:11.186262  108626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:21:11.186435  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:21:11.532432  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:21:11.686645  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:21:11.686790  108626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:21:11.687347  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:21:12.032729  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:21:12.187006  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:21:12.187075  108626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:21:12.187277  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:21:12.532177  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:21:12.686353  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:21:12.686420  108626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:21:12.686993  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:21:13.032102  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:21:13.187755  108626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:21:13.188876  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:21:13.189034  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:21:13.532447  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:21:13.687038  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:21:13.688822  108626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:21:13.688915  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:21:14.032435  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:21:14.187067  108626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:21:14.187211  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:21:14.187244  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:21:14.532733  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:21:14.687661  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:21:14.687758  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:21:14.687794  108626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:21:15.032877  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:21:15.186961  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:21:15.187001  108626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:21:15.186991  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:21:15.532544  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:21:15.686645  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:21:15.686642  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:21:15.686711  108626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:21:16.032483  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:21:16.187017  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:21:16.187043  108626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:21:16.187258  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:21:16.532689  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:21:16.687352  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:21:16.688158  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:21:16.688324  108626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:21:17.032414  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:21:17.186309  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:21:17.186480  108626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:21:17.186845  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:21:17.532876  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:21:17.685580  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:21:17.685633  108626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:21:17.686527  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:21:18.032682  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:21:18.187091  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:21:18.187139  108626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:21:18.187156  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:21:18.532187  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:21:18.685677  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:21:18.686181  108626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:21:18.686522  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:21:19.032194  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:21:19.186562  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:21:19.186588  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:21:19.186638  108626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:21:19.532213  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:21:19.686371  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:21:19.686898  108626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:21:19.687122  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:21:20.032751  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:21:20.186615  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:21:20.186666  108626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:21:20.186806  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:21:20.532366  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:21:20.686306  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:21:20.686499  108626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:21:20.686878  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:21:21.032533  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:21:21.187094  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:21:21.187437  108626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:21:21.187717  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:21:21.532111  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:21:21.686066  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:21:21.686146  108626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:21:21.686622  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:21:22.033013  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:21:22.185695  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:21:22.186321  108626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:21:22.186479  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:21:22.532149  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:21:22.686152  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:21:22.686311  108626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:21:22.686736  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:21:23.033360  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:21:23.186507  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:21:23.186744  108626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:21:23.186932  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:21:23.533033  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:21:23.685765  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:21:23.686020  108626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:21:23.686213  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:21:24.032257  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:21:24.186468  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:21:24.186695  108626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:21:24.186879  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:21:24.533132  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:21:24.709211  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:21:24.709363  108626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:21:24.709516  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:21:25.033404  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:21:25.186624  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:21:25.186762  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:21:25.186800  108626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:21:25.532472  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:21:25.686236  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:21:25.686279  108626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:21:25.686645  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:21:26.032537  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:21:26.186385  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:21:26.186565  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:21:26.186661  108626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:21:26.533355  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:21:26.686701  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:21:26.686858  108626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:21:26.687060  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:21:27.032664  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:21:27.187055  108626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:21:27.187235  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:21:27.187269  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:21:27.535768  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:21:27.686841  108626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:21:27.686904  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:21:27.687101  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:21:28.031851  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:21:28.185727  108626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:21:28.185755  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:21:28.186209  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:21:28.532219  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:21:28.686012  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:21:28.686035  108626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:21:28.686653  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:21:29.031419  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:21:29.186356  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:21:29.186480  108626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:21:29.186630  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:21:29.533167  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:21:29.686142  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:21:29.686228  108626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:21:29.686411  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:21:30.034025  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:21:30.186368  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:21:30.186475  108626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:21:30.186640  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:21:30.532887  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:21:30.685912  108626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:21:30.685912  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:21:30.686333  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:21:31.033381  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:21:31.186619  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:21:31.186658  108626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:21:31.187316  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:21:31.532645  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:21:31.687081  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:21:31.687200  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:21:31.687219  108626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:21:32.032986  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:21:32.185832  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:21:32.185927  108626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:21:32.186270  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:21:32.531689  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:21:32.686961  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:21:32.687031  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:21:32.687101  108626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:21:33.032520  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:21:33.186988  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:21:33.187070  108626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:21:33.187085  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:21:33.532284  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:21:33.686240  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:21:33.686331  108626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:21:33.686551  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:21:34.032832  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:21:34.186029  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:21:34.186292  108626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:21:34.186375  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:21:34.531993  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:21:34.687493  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:21:34.687694  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:21:34.687893  108626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:21:35.033472  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:21:35.187141  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:21:35.187164  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:21:35.187423  108626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:21:35.532471  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:21:35.687041  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1123 08:21:35.687114  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:21:35.687252  108626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:21:36.033233  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:21:36.186023  108626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:21:36.186050  108626 kapi.go:107] duration metric: took 43.003222323s to wait for kubernetes.io/minikube-addons=registry ...
	I1123 08:21:36.186567  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:21:36.532688  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:21:36.688015  108626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:21:36.688157  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:21:37.032735  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:21:37.187324  108626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:21:37.187678  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:21:37.532142  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:21:37.686203  108626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:21:37.686868  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:21:38.032941  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:21:38.185829  108626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:21:38.186330  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:21:38.532846  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:21:38.686295  108626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:21:38.686613  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:21:39.033091  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:21:39.187175  108626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:21:39.189745  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:21:39.532917  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:21:39.688292  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:21:39.688873  108626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:21:40.032762  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:21:40.187058  108626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:21:40.187075  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:21:40.533003  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:21:40.686299  108626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:21:40.686711  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:21:41.032603  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:21:41.186769  108626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:21:41.186784  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:21:41.533164  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:21:41.685960  108626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:21:41.686779  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:21:42.032622  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:21:42.186913  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:21:42.186913  108626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:21:42.532830  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:21:42.686850  108626 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1123 08:21:42.686959  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:21:43.031906  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:21:43.185633  108626 kapi.go:107] duration metric: took 50.002801925s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1123 08:21:43.186497  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:21:43.532309  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:21:43.687208  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:21:44.032749  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:21:44.188324  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:21:44.532303  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:21:44.688062  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:21:45.032802  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:21:45.188277  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:21:45.532144  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1123 08:21:45.687185  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:21:46.032715  108626 kapi.go:107] duration metric: took 46.503656157s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1123 08:21:46.035428  108626 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-450053 cluster.
	I1123 08:21:46.036871  108626 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1123 08:21:46.038206  108626 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
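For reference, a minimal client-go sketch of the opt-out mechanism the gcp-auth messages above describe: a pod carrying the `gcp-auth-skip-secret` label key is skipped when credentials are injected. Only the label key comes from the message above; the pod name, namespace, label value, and kubeconfig path below are illustrative assumptions, not part of the test run.

	package main

	import (
		"context"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Assumes the default kubeconfig (~/.kube/config) written for the cluster.
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		client := kubernetes.NewForConfigOrDie(cfg)

		pod := &corev1.Pod{
			ObjectMeta: metav1.ObjectMeta{
				Name: "no-gcp-auth", // hypothetical pod name
				// Only the key matters for the opt-out; the value is arbitrary.
				Labels: map[string]string{"gcp-auth-skip-secret": "true"},
			},
			Spec: corev1.PodSpec{
				Containers: []corev1.Container{{
					Name:  "app",
					Image: "gcr.io/k8s-minikube/busybox:1.28.4-glibc",
				}},
			},
		}
		if _, err := client.CoreV1().Pods("default").Create(
			context.Background(), pod, metav1.CreateOptions{}); err != nil {
			panic(err)
		}
	}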
	I1123 08:21:46.187894  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:21:46.687363  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:21:47.187141  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:21:47.688043  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:21:48.187770  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:21:48.687340  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:21:49.187774  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:21:49.687608  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:21:50.187470  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:21:50.687318  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:21:51.188386  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:21:51.687618  108626 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1123 08:21:52.187433  108626 kapi.go:107] duration metric: took 59.003509486s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1123 08:21:52.265809  108626 out.go:179] * Enabled addons: inspektor-gadget, registry-creds, nvidia-device-plugin, cloud-spanner, ingress-dns, amd-gpu-device-plugin, storage-provisioner, metrics-server, storage-provisioner-rancher, yakd, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I1123 08:21:52.267943  108626 addons.go:530] duration metric: took 1m1.059997692s for enable addons: enabled=[inspektor-gadget registry-creds nvidia-device-plugin cloud-spanner ingress-dns amd-gpu-device-plugin storage-provisioner metrics-server storage-provisioner-rancher yakd volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I1123 08:21:52.268019  108626 start.go:247] waiting for cluster config update ...
	I1123 08:21:52.268051  108626 start.go:256] writing updated cluster config ...
	I1123 08:21:52.268390  108626 ssh_runner.go:195] Run: rm -f paused
	I1123 08:21:52.272720  108626 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1123 08:21:52.276036  108626 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-n2ksh" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:21:52.280050  108626 pod_ready.go:94] pod "coredns-66bc5c9577-n2ksh" is "Ready"
	I1123 08:21:52.280075  108626 pod_ready.go:86] duration metric: took 4.013698ms for pod "coredns-66bc5c9577-n2ksh" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:21:52.281948  108626 pod_ready.go:83] waiting for pod "etcd-addons-450053" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:21:52.285514  108626 pod_ready.go:94] pod "etcd-addons-450053" is "Ready"
	I1123 08:21:52.285534  108626 pod_ready.go:86] duration metric: took 3.553039ms for pod "etcd-addons-450053" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:21:52.287419  108626 pod_ready.go:83] waiting for pod "kube-apiserver-addons-450053" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:21:52.290730  108626 pod_ready.go:94] pod "kube-apiserver-addons-450053" is "Ready"
	I1123 08:21:52.290751  108626 pod_ready.go:86] duration metric: took 3.312968ms for pod "kube-apiserver-addons-450053" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:21:52.292403  108626 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-450053" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:21:52.676138  108626 pod_ready.go:94] pod "kube-controller-manager-addons-450053" is "Ready"
	I1123 08:21:52.676171  108626 pod_ready.go:86] duration metric: took 383.745849ms for pod "kube-controller-manager-addons-450053" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:21:52.877186  108626 pod_ready.go:83] waiting for pod "kube-proxy-mvm7j" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:21:53.276222  108626 pod_ready.go:94] pod "kube-proxy-mvm7j" is "Ready"
	I1123 08:21:53.276255  108626 pod_ready.go:86] duration metric: took 399.044828ms for pod "kube-proxy-mvm7j" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:21:53.476786  108626 pod_ready.go:83] waiting for pod "kube-scheduler-addons-450053" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:21:53.876764  108626 pod_ready.go:94] pod "kube-scheduler-addons-450053" is "Ready"
	I1123 08:21:53.876791  108626 pod_ready.go:86] duration metric: took 399.975684ms for pod "kube-scheduler-addons-450053" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 08:21:53.876802  108626 pod_ready.go:40] duration metric: took 1.604052431s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1123 08:21:53.922248  108626 start.go:625] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1123 08:21:53.923728  108626 out.go:179] * Done! kubectl is now configured to use "addons-450053" cluster and "default" namespace by default
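The kapi.go:96 "waiting for pod" lines and the pod_ready.go waits above are the output of label-selector polling loops: each addon's selector (e.g. kubernetes.io/minikube-addons=registry) is listed on a roughly 500ms cadence until every matching pod leaves Pending. A minimal sketch of that pattern follows — an illustration assuming client-go, not minikube's actual implementation; the function and package names are ours.

	package poll

	import (
		"context"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
	)

	// waitForPodsRunning polls pods matching selector in ns until all are Running,
	// or until timeout. Interval/timeout mirror the cadence and per-addon waits
	// visible in the log above (hypothetical helper, not minikube's own).
	func waitForPodsRunning(ctx context.Context, client kubernetes.Interface,
		ns, selector string, interval, timeout time.Duration) error {
		return wait.PollUntilContextTimeout(ctx, interval, timeout, true,
			func(ctx context.Context) (bool, error) {
				pods, err := client.CoreV1().Pods(ns).List(ctx,
					metav1.ListOptions{LabelSelector: selector})
				if err != nil {
					return false, nil // treat API hiccups as "not ready yet" and keep polling
				}
				if len(pods.Items) == 0 {
					return false, nil // nothing scheduled yet: the "Pending: [<nil>]" state
				}
				for _, p := range pods.Items {
					if p.Status.Phase != corev1.PodRunning {
						return false, nil
					}
				}
				return true, nil
			})
	}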
	
	
	==> CRI-O <==
	Nov 23 08:21:54 addons-450053 crio[772]: time="2025-11-23T08:21:54.788701582Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=b9812e75-a4e4-48ed-814f-70f2bbccbcb1 name=/runtime.v1.ImageService/PullImage
	Nov 23 08:21:54 addons-450053 crio[772]: time="2025-11-23T08:21:54.790343135Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 23 08:21:57 addons-450053 crio[772]: time="2025-11-23T08:21:57.066534049Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=b9812e75-a4e4-48ed-814f-70f2bbccbcb1 name=/runtime.v1.ImageService/PullImage
	Nov 23 08:21:57 addons-450053 crio[772]: time="2025-11-23T08:21:57.067191077Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=9f0e3c4b-2f9d-4bd5-a0cb-a51effe59237 name=/runtime.v1.ImageService/ImageStatus
	Nov 23 08:21:57 addons-450053 crio[772]: time="2025-11-23T08:21:57.068600427Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=ec734f15-a1bb-4af5-9760-9e2e7945eaa4 name=/runtime.v1.ImageService/ImageStatus
	Nov 23 08:21:57 addons-450053 crio[772]: time="2025-11-23T08:21:57.073942019Z" level=info msg="Creating container: default/busybox/busybox" id=1a9ec652-f763-4d59-b250-50ecabddd3c9 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 08:21:57 addons-450053 crio[772]: time="2025-11-23T08:21:57.07410365Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 08:21:57 addons-450053 crio[772]: time="2025-11-23T08:21:57.079493131Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 08:21:57 addons-450053 crio[772]: time="2025-11-23T08:21:57.079946464Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 08:21:57 addons-450053 crio[772]: time="2025-11-23T08:21:57.105674779Z" level=info msg="Created container ec30b116900af0b976e23df2c6f535162805d1a3c55f4d071e354f2c73052de5: default/busybox/busybox" id=1a9ec652-f763-4d59-b250-50ecabddd3c9 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 08:21:57 addons-450053 crio[772]: time="2025-11-23T08:21:57.106337105Z" level=info msg="Starting container: ec30b116900af0b976e23df2c6f535162805d1a3c55f4d071e354f2c73052de5" id=7cd19a32-bcbe-4f64-96f1-b171d032246b name=/runtime.v1.RuntimeService/StartContainer
	Nov 23 08:21:57 addons-450053 crio[772]: time="2025-11-23T08:21:57.108087257Z" level=info msg="Started container" PID=6234 containerID=ec30b116900af0b976e23df2c6f535162805d1a3c55f4d071e354f2c73052de5 description=default/busybox/busybox id=7cd19a32-bcbe-4f64-96f1-b171d032246b name=/runtime.v1.RuntimeService/StartContainer sandboxID=f664c97653118d2bcc751b9fc948f65c356aa2d7d2b0519b532b5e2c204853d2
	Nov 23 08:22:05 addons-450053 crio[772]: time="2025-11-23T08:22:05.613963547Z" level=info msg="Running pod sandbox: local-path-storage/helper-pod-create-pvc-3787c6ea-829c-4a2e-a5fe-c888ef0ec240/POD" id=123bfcf1-c55c-4399-b31e-cf3af5c40156 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 23 08:22:05 addons-450053 crio[772]: time="2025-11-23T08:22:05.614080822Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 08:22:05 addons-450053 crio[772]: time="2025-11-23T08:22:05.620733519Z" level=info msg="Got pod network &{Name:helper-pod-create-pvc-3787c6ea-829c-4a2e-a5fe-c888ef0ec240 Namespace:local-path-storage ID:e1e036bdfc0aadc8da2728511f4afd9a47e83fc4bfdc17060901cb7440ff6199 UID:1630b01f-3d3f-4e8a-8a8f-33e7461656ff NetNS:/var/run/netns/23148536-d1e9-48e1-8852-f65bd869b2cb Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc0005a6568}] Aliases:map[]}"
	Nov 23 08:22:05 addons-450053 crio[772]: time="2025-11-23T08:22:05.620777619Z" level=info msg="Adding pod local-path-storage_helper-pod-create-pvc-3787c6ea-829c-4a2e-a5fe-c888ef0ec240 to CNI network \"kindnet\" (type=ptp)"
	Nov 23 08:22:05 addons-450053 crio[772]: time="2025-11-23T08:22:05.634064072Z" level=info msg="Got pod network &{Name:helper-pod-create-pvc-3787c6ea-829c-4a2e-a5fe-c888ef0ec240 Namespace:local-path-storage ID:e1e036bdfc0aadc8da2728511f4afd9a47e83fc4bfdc17060901cb7440ff6199 UID:1630b01f-3d3f-4e8a-8a8f-33e7461656ff NetNS:/var/run/netns/23148536-d1e9-48e1-8852-f65bd869b2cb Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc0005a6568}] Aliases:map[]}"
	Nov 23 08:22:05 addons-450053 crio[772]: time="2025-11-23T08:22:05.634270531Z" level=info msg="Checking pod local-path-storage_helper-pod-create-pvc-3787c6ea-829c-4a2e-a5fe-c888ef0ec240 for CNI network kindnet (type=ptp)"
	Nov 23 08:22:05 addons-450053 crio[772]: time="2025-11-23T08:22:05.635612189Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Nov 23 08:22:05 addons-450053 crio[772]: time="2025-11-23T08:22:05.636844285Z" level=info msg="Ran pod sandbox e1e036bdfc0aadc8da2728511f4afd9a47e83fc4bfdc17060901cb7440ff6199 with infra container: local-path-storage/helper-pod-create-pvc-3787c6ea-829c-4a2e-a5fe-c888ef0ec240/POD" id=123bfcf1-c55c-4399-b31e-cf3af5c40156 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 23 08:22:05 addons-450053 crio[772]: time="2025-11-23T08:22:05.638388484Z" level=info msg="Checking image status: docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79" id=b484f8f4-4b79-465d-a15a-d3888943385a name=/runtime.v1.ImageService/ImageStatus
	Nov 23 08:22:05 addons-450053 crio[772]: time="2025-11-23T08:22:05.63856287Z" level=info msg="Image docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79 not found" id=b484f8f4-4b79-465d-a15a-d3888943385a name=/runtime.v1.ImageService/ImageStatus
	Nov 23 08:22:05 addons-450053 crio[772]: time="2025-11-23T08:22:05.638607258Z" level=info msg="Neither image nor artfiact docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79 found" id=b484f8f4-4b79-465d-a15a-d3888943385a name=/runtime.v1.ImageService/ImageStatus
	Nov 23 08:22:05 addons-450053 crio[772]: time="2025-11-23T08:22:05.639270374Z" level=info msg="Pulling image: docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79" id=fd82733e-492f-4df8-bfb9-77cc9c776303 name=/runtime.v1.ImageService/PullImage
	Nov 23 08:22:05 addons-450053 crio[772]: time="2025-11-23T08:22:05.647452034Z" level=info msg="Trying to access \"docker.io/library/busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79\""
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED              STATE               NAME                                     ATTEMPT             POD ID              POD                                        NAMESPACE
	ec30b116900af       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998                                          9 seconds ago        Running             busybox                                  0                   f664c97653118       busybox                                    default
	738d8d379f251       registry.k8s.io/sig-storage/csi-snapshotter@sha256:d844cb1faeb4ecf44bae6aea370c9c6128a87e665e40370021427d79a8819ee5                          15 seconds ago       Running             csi-snapshotter                          0                   9dd772da7e394       csi-hostpathplugin-kgwc9                   kube-system
	d984a1356e5ec       registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7                          16 seconds ago       Running             csi-provisioner                          0                   9dd772da7e394       csi-hostpathplugin-kgwc9                   kube-system
	f1bd36bf8d3aa       registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6                            17 seconds ago       Running             liveness-probe                           0                   9dd772da7e394       csi-hostpathplugin-kgwc9                   kube-system
	e39671b629175       registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11                           18 seconds ago       Running             hostpath                                 0                   9dd772da7e394       csi-hostpathplugin-kgwc9                   kube-system
	524005afa9256       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc                19 seconds ago       Running             node-driver-registrar                    0                   9dd772da7e394       csi-hostpathplugin-kgwc9                   kube-system
	b833e2e0c14c4       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:441f351b4520c228d29ba8c02a438d9ba971dafbbba5c91eaf882b1528797fb8                                 21 seconds ago       Running             gcp-auth                                 0                   54652698e9b98       gcp-auth-78565c9fb4-mxx49                  gcp-auth
	0d734b50d5d0a       registry.k8s.io/ingress-nginx/controller@sha256:7f2b00bd369a972bfb09acfe8c2525b99caeeeb54ab71d2822343e8fd4222e27                             24 seconds ago       Running             controller                               0                   4e7ba512f1fa6       ingress-nginx-controller-6c8bf45fb-k5xk4   ingress-nginx
	9a58810f94994       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:9a12b3c1d155bb081ff408a9b6c1cec18573c967e0c3917225b81ffe11c0b7f2                            27 seconds ago       Running             gadget                                   0                   f80be77ba9e98       gadget-mblm5                               gadget
	9989944eaa26f       gcr.io/k8s-minikube/kube-registry-proxy@sha256:8f72a79b63ca56074435e82b87fca2642a8117e60be313d3586dbe2bfff11cac                              30 seconds ago       Running             registry-proxy                           0                   e8cbaccad63a6       registry-proxy-l5z45                       kube-system
	e3688d5b85c22       docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                                     33 seconds ago       Running             amd-gpu-device-plugin                    0                   5e2f3a85cf238       amd-gpu-device-plugin-625vc                kube-system
	8ecc013e239af       nvcr.io/nvidia/k8s-device-plugin@sha256:20db699f1480b6f37423cab909e9c6df5a4fdbd981b405e0d72f00a86fee5100                                     34 seconds ago       Running             nvidia-device-plugin-ctr                 0                   77bf89844ed51       nvidia-device-plugin-daemonset-hpnrm       kube-system
	ef2b7b50281e2       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:f016159150cb72d879e0d3b6852afbed68fe21d86be1e92c62ab7f56515287f5                   39 seconds ago       Exited              patch                                    0                   1c5b9002ede3c       gcp-auth-certs-patch-qd4wc                 gcp-auth
	1dfc56fc8d94b       docker.io/library/registry@sha256:f57ffd2bb01704b6082396158e77ca6d1112bc6fe32315c322864de804750d8a                                           39 seconds ago       Running             registry                                 0                   ace3557fae304       registry-6b586f9694-48d75                  kube-system
	227f1cba9bc38       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864   41 seconds ago       Running             csi-external-health-monitor-controller   0                   9dd772da7e394       csi-hostpathplugin-kgwc9                   kube-system
	878966c2c1dd7       registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8                              42 seconds ago       Running             csi-resizer                              0                   ef69f18ca58fd       csi-hostpath-resizer-0                     kube-system
	43699df157c91       gcr.io/cloud-spanner-emulator/emulator@sha256:22a4d5b0f97bd0c2ee20da342493c5a60e40b4d62ec20c174cb32ff4ee1f65bf                               43 seconds ago       Running             cloud-spanner-emulator                   0                   5a15d90ad8077       cloud-spanner-emulator-5bdddb765-4vxx9     default
	2f1ccac12bdf0       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:f016159150cb72d879e0d3b6852afbed68fe21d86be1e92c62ab7f56515287f5                   47 seconds ago       Exited              create                                   0                   b5b520188a0ef       gcp-auth-certs-create-tkspn                gcp-auth
	819c675278b23       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:f016159150cb72d879e0d3b6852afbed68fe21d86be1e92c62ab7f56515287f5                   47 seconds ago       Exited              patch                                    0                   4d4b5f1b9fbd9       ingress-nginx-admission-patch-5fm9n        ingress-nginx
	f9cd2adc0709d       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      47 seconds ago       Running             volume-snapshot-controller               0                   a4d2d6fec0f56       snapshot-controller-7d9fbc56b8-jgxr4       kube-system
	a6ff371d12340       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      48 seconds ago       Running             volume-snapshot-controller               0                   ae36487e3d6b6       snapshot-controller-7d9fbc56b8-c52wh       kube-system
	492b96400c0c2       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:f016159150cb72d879e0d3b6852afbed68fe21d86be1e92c62ab7f56515287f5                   49 seconds ago       Exited              create                                   0                   de07514140f90       ingress-nginx-admission-create-ll2hd       ingress-nginx
	c414e963396a7       docker.io/marcnuri/yakd@sha256:8ebd1692ed5271719f13b728d9af7acb839aa04821e931c8993d908ad68b69fd                                              50 seconds ago       Running             yakd                                     0                   b8b58d174c021       yakd-dashboard-5ff678cb9-289rv             yakd-dashboard
	8364e195c165b       registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0                             53 seconds ago       Running             csi-attacher                             0                   acaf80940476e       csi-hostpath-attacher-0                    kube-system
	d99d5e6604fb8       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef                             55 seconds ago       Running             local-path-provisioner                   0                   0a11f64d4448e       local-path-provisioner-648f6765c9-xkbws    local-path-storage
	bca140d99c87f       docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7                               56 seconds ago       Running             minikube-ingress-dns                     0                   98d3c1d00a63d       kube-ingress-dns-minikube                  kube-system
	0e62c249e71fe       registry.k8s.io/metrics-server/metrics-server@sha256:5dd31abb8093690d9624a53277a00d2257e7e57e6766be3f9f54cf9f54cddbc1                        About a minute ago   Running             metrics-server                           0                   2ab31846fb9a1       metrics-server-85b7d694d7-74pfv            kube-system
	4ea39bfdb1b8e       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                                             About a minute ago   Running             storage-provisioner                      0                   f5bb3744257d6       storage-provisioner                        kube-system
	fc8e0ddc56a4b       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                                             About a minute ago   Running             coredns                                  0                   f546e1227645b       coredns-66bc5c9577-n2ksh                   kube-system
	f5b7d2b9fc7fd       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                                                             About a minute ago   Running             kube-proxy                               0                   bf0f80add7e5e       kube-proxy-mvm7j                           kube-system
	204df826a5f7f       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                                                             About a minute ago   Running             kindnet-cni                              0                   3720309150b27       kindnet-w25rx                              kube-system
	2b03a5a989737       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                                                             About a minute ago   Running             kube-apiserver                           0                   46716e45f5c05       kube-apiserver-addons-450053               kube-system
	5ce09f86a113c       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                                                             About a minute ago   Running             kube-scheduler                           0                   73ff089ec72a8       kube-scheduler-addons-450053               kube-system
	3d0e901e59417       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                                                             About a minute ago   Running             kube-controller-manager                  0                   e84a186999b88       kube-controller-manager-addons-450053      kube-system
	58c0dd74075cf       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                                                             About a minute ago   Running             etcd                                     0                   13db6d055a8b9       etcd-addons-450053                         kube-system
	
	
	==> coredns [fc8e0ddc56a4bb558f8c5f776af3a634abca3c1b4e966b269f9b6b6d44daab9e] <==
	[INFO] 10.244.0.19:44674 - 22375 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000145968s
	[INFO] 10.244.0.19:53317 - 38527 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000105516s
	[INFO] 10.244.0.19:53317 - 38347 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000165771s
	[INFO] 10.244.0.19:38217 - 53485 "AAAA IN registry.kube-system.svc.cluster.local.europe-west2-a.c.k8s-minikube.internal. udp 95 false 512" NXDOMAIN qr,aa,rd,ra 206 0.000072798s
	[INFO] 10.244.0.19:38217 - 53166 "A IN registry.kube-system.svc.cluster.local.europe-west2-a.c.k8s-minikube.internal. udp 95 false 512" NXDOMAIN qr,aa,rd,ra 206 0.000099604s
	[INFO] 10.244.0.19:41665 - 65337 "AAAA IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,aa,rd,ra 185 0.00010474s
	[INFO] 10.244.0.19:41665 - 65133 "A IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,aa,rd,ra 185 0.000129571s
	[INFO] 10.244.0.19:35077 - 2926 "AAAA IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,aa,rd,ra 177 0.000071952s
	[INFO] 10.244.0.19:35077 - 2737 "A IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,aa,rd,ra 177 0.000101175s
	[INFO] 10.244.0.19:50566 - 41841 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000145214s
	[INFO] 10.244.0.19:50566 - 41660 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000212613s
	[INFO] 10.244.0.22:54415 - 26600 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.0001886s
	[INFO] 10.244.0.22:55513 - 16159 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000264146s
	[INFO] 10.244.0.22:34345 - 23917 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000101179s
	[INFO] 10.244.0.22:52947 - 36067 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000113521s
	[INFO] 10.244.0.22:44298 - 27430 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000109992s
	[INFO] 10.244.0.22:55824 - 50577 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000162677s
	[INFO] 10.244.0.22:51376 - 53762 "A IN storage.googleapis.com.europe-west2-a.c.k8s-minikube.internal. udp 90 false 1232" NXDOMAIN qr,rd,ra 190 0.004809748s
	[INFO] 10.244.0.22:56322 - 35652 "AAAA IN storage.googleapis.com.europe-west2-a.c.k8s-minikube.internal. udp 90 false 1232" NXDOMAIN qr,rd,ra 190 0.005280922s
	[INFO] 10.244.0.22:35576 - 46405 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.004460737s
	[INFO] 10.244.0.22:34495 - 30728 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.004561791s
	[INFO] 10.244.0.22:53202 - 15769 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.004608446s
	[INFO] 10.244.0.22:55965 - 16469 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.004902531s
	[INFO] 10.244.0.22:32940 - 62051 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001179393s
	[INFO] 10.244.0.22:50614 - 42059 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 268 0.001248579s
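	
	The NXDOMAIN ladder above is normal Kubernetes DNS search-path expansion, not a resolution failure: with ndots:5, a short name such as storage.googleapis.com is tried against every search domain first, and only the final bare query returns NOERROR. The suffixes in the queries are consistent with a pod resolv.conf along the following lines (a reconstruction from the log for the pod at 10.244.0.22, which appears to sit in the gcp-auth namespace; the nameserver address is the conventional minikube kube-dns ClusterIP and is an assumption, not captured here):
	
	    search gcp-auth.svc.cluster.local svc.cluster.local cluster.local europe-west2-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal
	    nameserver 10.96.0.10
	    options ndots:5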
	
	
	==> describe nodes <==
	Name:               addons-450053
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-450053
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=50c3a8a3c03e8a84b6c978a884d21c3de8c6d4f1
	                    minikube.k8s.io/name=addons-450053
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_23T08_20_46_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-450053
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-450053"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 23 Nov 2025 08:20:43 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-450053
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 23 Nov 2025 08:21:57 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 23 Nov 2025 08:21:47 +0000   Sun, 23 Nov 2025 08:20:41 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 23 Nov 2025 08:21:47 +0000   Sun, 23 Nov 2025 08:20:41 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 23 Nov 2025 08:21:47 +0000   Sun, 23 Nov 2025 08:20:41 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 23 Nov 2025 08:21:47 +0000   Sun, 23 Nov 2025 08:21:03 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-450053
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 9629f1d5bc1ed524a56ce23c69214c09
	  System UUID:                9cca0d89-df6b-42c0-91ac-94fbf27bab0b
	  Boot ID:                    49386d37-de97-442f-8364-9b77578bcf47
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (28 in total)
	  Namespace                   Name                                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         12s
	  default                     cloud-spanner-emulator-5bdddb765-4vxx9                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         74s
	  gadget                      gadget-mblm5                                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         74s
	  gcp-auth                    gcp-auth-78565c9fb4-mxx49                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         67s
	  ingress-nginx               ingress-nginx-controller-6c8bf45fb-k5xk4                      100m (1%)     0 (0%)      90Mi (0%)        0 (0%)         74s
	  kube-system                 amd-gpu-device-plugin-625vc                                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         63s
	  kube-system                 coredns-66bc5c9577-n2ksh                                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     75s
	  kube-system                 csi-hostpath-attacher-0                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         74s
	  kube-system                 csi-hostpath-resizer-0                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         74s
	  kube-system                 csi-hostpathplugin-kgwc9                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         63s
	  kube-system                 etcd-addons-450053                                            100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         81s
	  kube-system                 kindnet-w25rx                                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      76s
	  kube-system                 kube-apiserver-addons-450053                                  250m (3%)     0 (0%)      0 (0%)           0 (0%)         81s
	  kube-system                 kube-controller-manager-addons-450053                         200m (2%)     0 (0%)      0 (0%)           0 (0%)         81s
	  kube-system                 kube-ingress-dns-minikube                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         74s
	  kube-system                 kube-proxy-mvm7j                                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         76s
	  kube-system                 kube-scheduler-addons-450053                                  100m (1%)     0 (0%)      0 (0%)           0 (0%)         81s
	  kube-system                 metrics-server-85b7d694d7-74pfv                               100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         74s
	  kube-system                 nvidia-device-plugin-daemonset-hpnrm                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         63s
	  kube-system                 registry-6b586f9694-48d75                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         74s
	  kube-system                 registry-creds-764b6fb674-gvfgs                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         75s
	  kube-system                 registry-proxy-l5z45                                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         63s
	  kube-system                 snapshot-controller-7d9fbc56b8-c52wh                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         73s
	  kube-system                 snapshot-controller-7d9fbc56b8-jgxr4                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         73s
	  kube-system                 storage-provisioner                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         74s
	  local-path-storage          helper-pod-create-pvc-3787c6ea-829c-4a2e-a5fe-c888ef0ec240    0 (0%)        0 (0%)      0 (0%)           0 (0%)         1s
	  local-path-storage          local-path-provisioner-648f6765c9-xkbws                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         74s
	  yakd-dashboard              yakd-dashboard-5ff678cb9-289rv                                0 (0%)        0 (0%)      128Mi (0%)       256Mi (0%)     74s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (13%)  100m (1%)
	  memory             638Mi (1%)   476Mi (1%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 73s   kube-proxy       
	  Normal  Starting                 81s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  81s   kubelet          Node addons-450053 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    81s   kubelet          Node addons-450053 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     81s   kubelet          Node addons-450053 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           76s   node-controller  Node addons-450053 event: Registered Node addons-450053 in Controller
	  Normal  NodeReady                63s   kubelet          Node addons-450053 status is now: NodeReady
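	
	For reference, the percentages in the Allocated resources table are computed against the Allocatable figures above and truncated to whole percent:
	
	    cpu requests:    1050m / 8000m                  = 13.1% -> shown as 13%
	    cpu limits:       100m / 8000m                  = 1.25% -> shown as 1%
	    memory requests:  638Mi / 32863356Ki (~32093Mi) = 1.99% -> shown as 1%
	    memory limits:    476Mi / ~32093Mi              = 1.48% -> shown as 1%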
	
	
	==> dmesg <==
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff d6 94 eb ee aa b4 08 06
	[Nov23 08:05] IPv4: martian source 10.244.0.1 from 10.244.0.32, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 1a f6 74 30 0c d9 08 06
	[ +26.103023] IPv4: martian source 10.244.0.1 from 10.244.0.33, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 5e 3f 8b 25 7b fe 08 06
	[ +25.798829] IPv4: martian source 10.244.0.1 from 10.244.0.34, on dev eth0
	[  +0.000025] ll header: 00000000: ff ff ff ff ff ff 1a ae 34 db 72 ca 08 06
	[Nov23 08:06] IPv4: martian source 10.244.0.1 from 10.244.0.35, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff be 89 88 33 08 30 08 06
	[ +20.473454] IPv4: martian source 10.244.0.1 from 10.244.0.39, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 72 78 e0 0f f8 9d 08 06
	[Nov23 08:09] IPv4: martian source 10.244.0.1 from 10.244.0.45, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 6e 40 55 95 e0 41 08 06
	[Nov23 08:10] IPv4: martian source 10.244.0.1 from 10.244.0.46, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 2e cf 8f dc 5f 93 08 06
	[ +22.213064] IPv4: martian source 10.244.0.1 from 10.244.0.47, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff d2 f1 94 01 dd 7c 08 06
	[Nov23 08:11] IPv4: martian source 10.244.0.1 from 10.244.0.48, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff b2 e1 6e 8f 1e 9b 08 06
	[Nov23 08:12] IPv4: martian source 10.244.0.1 from 10.244.0.49, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff f2 22 b5 87 6b 30 08 06
	[ +34.772372] IPv4: martian source 10.244.0.1 from 10.244.0.50, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 06 82 4b 59 78 74 08 06
	[Nov23 08:13] IPv4: martian source 10.244.0.1 from 10.244.0.51, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 1e 73 2a 74 8f 84 08 06
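	
	The repeated "martian source" entries are the kernel logging packets in the pod CIDR (10.244.0.0/24) that fail its source-address sanity checks on eth0; on a Docker-driver minikube node this is routine bridge/NAT noise rather than a test failure. Whether such packets are logged is governed by the log_martians sysctl, which could be inspected on the node with something like:
	
	    sysctl net.ipv4.conf.all.log_martians net.ipv4.conf.eth0.log_martians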
	
	
	==> etcd [58c0dd74075cff851380d2c9c1fbd280646b92ecf56beb6cd7c81444769b9354] <==
	{"level":"warn","ts":"2025-11-23T08:20:42.561653Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51262","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:20:42.568446Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51278","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:20:42.574822Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51292","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:20:42.582319Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51312","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:20:42.589761Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51334","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:20:42.598120Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51338","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:20:42.603907Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51366","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:20:42.609784Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51388","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:20:42.616911Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51404","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:20:42.623677Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51430","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:20:42.629844Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51454","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:20:42.635962Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51472","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:20:42.641963Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51496","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:20:42.656735Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51518","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:20:42.663735Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51546","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:20:42.677158Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51578","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:20:42.680525Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51584","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:20:42.686257Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51600","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:20:42.691982Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51614","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:20:53.686006Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39042","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:20:53.691571Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39064","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:21:18.218001Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58436","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:21:18.224278Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58446","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:21:18.237440Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:58474","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-23T08:21:58.015088Z","caller":"traceutil/trace.go:172","msg":"trace[1394206225] transaction","detail":"{read_only:false; response_revision:1230; number_of_response:1; }","duration":"144.604929ms","start":"2025-11-23T08:21:57.870466Z","end":"2025-11-23T08:21:58.015071Z","steps":["trace[1394206225] 'process raft request'  (duration: 112.722637ms)","trace[1394206225] 'compare'  (duration: 31.741613ms)"],"step_count":2}
	
	
	==> gcp-auth [b833e2e0c14c495f4822401a4e678bc4bf1b3c659b58dd7ec5a2f7fb8f13b8e0] <==
	2025/11/23 08:21:45 GCP Auth Webhook started!
	2025/11/23 08:21:54 Ready to marshal response ...
	2025/11/23 08:21:54 Ready to write response ...
	2025/11/23 08:21:54 Ready to marshal response ...
	2025/11/23 08:21:54 Ready to write response ...
	2025/11/23 08:21:54 Ready to marshal response ...
	2025/11/23 08:21:54 Ready to write response ...
	2025/11/23 08:22:05 Ready to marshal response ...
	2025/11/23 08:22:05 Ready to write response ...
	2025/11/23 08:22:05 Ready to marshal response ...
	2025/11/23 08:22:05 Ready to write response ...
	
	
	==> kernel <==
	 08:22:06 up  1:04,  0 user,  load average: 2.47, 1.58, 0.99
	Linux addons-450053 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [204df826a5f7fe5bc267f72965e69d0d4468a7d03c4f27b3c4dbc85558bcb790] <==
	I1123 08:20:52.892706       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1123 08:20:52.911146       1 controller.go:381] "Waiting for informer caches to sync"
	E1123 08:20:52.895534       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1123 08:20:52.895659       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1123 08:20:52.897026       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1123 08:20:52.911338       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1123 08:20:52.911511       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1123 08:20:53.014108       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	I1123 08:20:54.612186       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1123 08:20:54.612213       1 metrics.go:72] Registering metrics
	I1123 08:20:54.612306       1 controller.go:711] "Syncing nftables rules"
	I1123 08:21:02.893587       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1123 08:21:02.893656       1 main.go:301] handling current node
	I1123 08:21:12.892847       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1123 08:21:12.892890       1 main.go:301] handling current node
	I1123 08:21:22.892783       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1123 08:21:22.892831       1 main.go:301] handling current node
	I1123 08:21:32.893544       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1123 08:21:32.893576       1 main.go:301] handling current node
	I1123 08:21:42.892555       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1123 08:21:42.892581       1 main.go:301] handling current node
	I1123 08:21:52.892669       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1123 08:21:52.892736       1 main.go:301] handling current node
	I1123 08:22:02.893110       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1123 08:22:02.893144       1 main.go:301] handling current node
	
	
	==> kube-apiserver [2b03a5a989737025a7389ea9ffc8f4440c697509dd7c3423436b93deabfc0635] <==
	I1123 08:20:59.477317       1 alloc.go:328] "allocated clusterIPs" service="gcp-auth/gcp-auth" clusterIPs={"IPv4":"10.96.197.244"}
	W1123 08:21:03.160570       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.96.197.244:443: connect: connection refused
	W1123 08:21:03.160662       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.96.197.244:443: connect: connection refused
	E1123 08:21:03.160689       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.96.197.244:443: connect: connection refused" logger="UnhandledError"
	E1123 08:21:03.160701       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.96.197.244:443: connect: connection refused" logger="UnhandledError"
	W1123 08:21:03.181288       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.96.197.244:443: connect: connection refused
	E1123 08:21:03.181329       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.96.197.244:443: connect: connection refused" logger="UnhandledError"
	W1123 08:21:03.182273       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.96.197.244:443: connect: connection refused
	E1123 08:21:03.182310       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.96.197.244:443: connect: connection refused" logger="UnhandledError"
	W1123 08:21:06.667299       1 handler_proxy.go:99] no RequestInfo found in the context
	E1123 08:21:06.667370       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.98.189.118:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.98.189.118:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.98.189.118:443: connect: connection refused" logger="UnhandledError"
	E1123 08:21:06.667483       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1123 08:21:06.668111       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.98.189.118:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.98.189.118:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.98.189.118:443: connect: connection refused" logger="UnhandledError"
	E1123 08:21:06.673539       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.98.189.118:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.98.189.118:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.98.189.118:443: connect: connection refused" logger="UnhandledError"
	E1123 08:21:06.694571       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.98.189.118:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.98.189.118:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.98.189.118:443: connect: connection refused" logger="UnhandledError"
	I1123 08:21:06.771359       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W1123 08:21:18.217928       1 logging.go:55] [core] [Channel #267 SubChannel #268]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1123 08:21:18.224263       1 logging.go:55] [core] [Channel #271 SubChannel #272]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1123 08:21:18.237412       1 logging.go:55] [core] [Channel #275 SubChannel #276]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1123 08:21:18.244147       1 logging.go:55] [core] [Channel #279 SubChannel #280]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	E1123 08:22:04.612053       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:45424: use of closed network connection
	E1123 08:22:04.758272       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:45442: use of closed network connection
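	
	The metrics.k8s.io errors at 08:21:06 are the API aggregation layer probing metrics-server before its pod was serving; the connection-refused/503 responses stop once the backend comes up (the metrics-server container listed above had been running for about a minute by capture time). Assuming kubectl access to the cluster, the APIService health could be confirmed with:
	
	    kubectl get apiservice v1beta1.metrics.k8s.io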
	
	
	==> kube-controller-manager [3d0e901e59417cd60672ef724e6e4202b2c13b0499e5b198455979ade2929d18] <==
	I1123 08:20:50.096083       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1123 08:20:50.096092       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1123 08:20:50.096048       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1123 08:20:50.097385       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1123 08:20:50.097426       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1123 08:20:50.097470       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1123 08:20:50.097488       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1123 08:20:50.097522       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1123 08:20:50.097537       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1123 08:20:50.097560       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1123 08:20:50.097915       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1123 08:20:50.098198       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1123 08:20:50.099835       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1123 08:20:50.100841       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1123 08:20:50.100855       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1123 08:20:50.100929       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1123 08:20:50.113199       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1123 08:20:50.113209       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E1123 08:20:52.217373       1 replica_set.go:587] "Unhandled Error" err="sync \"kube-system/metrics-server-85b7d694d7\" failed with pods \"metrics-server-85b7d694d7-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found" logger="UnhandledError"
	I1123 08:21:05.035944       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I1123 08:21:20.105383       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="volumesnapshots.snapshot.storage.k8s.io"
	I1123 08:21:20.105427       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1123 08:21:20.119739       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1123 08:21:20.206233       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1123 08:21:20.220687       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [f5b7d2b9fc7fd00d96afc93889d6409f70924a5f3c846c8b45a49812283c03d7] <==
	I1123 08:20:52.498843       1 server_linux.go:53] "Using iptables proxy"
	I1123 08:20:52.583496       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1123 08:20:52.683865       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1123 08:20:52.683893       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1123 08:20:52.684018       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1123 08:20:52.709134       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1123 08:20:52.709199       1 server_linux.go:132] "Using iptables Proxier"
	I1123 08:20:52.716278       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1123 08:20:52.717462       1 server.go:527] "Version info" version="v1.34.1"
	I1123 08:20:52.717548       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1123 08:20:52.719873       1 config.go:200] "Starting service config controller"
	I1123 08:20:52.719899       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1123 08:20:52.719939       1 config.go:403] "Starting serviceCIDR config controller"
	I1123 08:20:52.719945       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1123 08:20:52.719982       1 config.go:106] "Starting endpoint slice config controller"
	I1123 08:20:52.719988       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1123 08:20:52.720803       1 config.go:309] "Starting node config controller"
	I1123 08:20:52.720850       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1123 08:20:52.720907       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1123 08:20:52.821009       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1123 08:20:52.821082       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1123 08:20:52.821136       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
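	
	The only warning here, "nodePortAddresses is unset", is kube-proxy noting that NodePort connections are accepted on all local IPs. Following the log's own suggestion, a minimal KubeProxyConfiguration fragment restricting this would look like the sketch below (an illustration, not minikube's shipped config):
	
	    apiVersion: kubeproxy.config.k8s.io/v1alpha1
	    kind: KubeProxyConfiguration
	    nodePortAddresses: ["primary"]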
	
	
	==> kube-scheduler [5ce09f86a113c208d0fb8a89714d1855c858b529d081cf18c7e93d6084542e29] <==
	E1123 08:20:43.125208       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1123 08:20:43.125317       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1123 08:20:43.125351       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1123 08:20:43.125406       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1123 08:20:43.125506       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1123 08:20:43.126232       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1123 08:20:43.126238       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1123 08:20:43.126304       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1123 08:20:43.126300       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1123 08:20:43.126375       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1123 08:20:43.126393       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1123 08:20:43.126423       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1123 08:20:43.126430       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1123 08:20:43.126476       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1123 08:20:43.946157       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1123 08:20:43.985432       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1123 08:20:44.029884       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1123 08:20:44.030774       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1123 08:20:44.179823       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1123 08:20:44.211952       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1123 08:20:44.223951       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1123 08:20:44.262531       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1123 08:20:44.285648       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1123 08:20:44.326219       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	I1123 08:20:46.322460       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
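	
	The "forbidden" list/watch errors at 08:20:43-44 are the usual control-plane bootstrap race: the scheduler starts before its RBAC bindings are visible, and the errors cease once "Caches are synced" is logged at 08:20:46. Assuming impersonation rights, the permission itself could be spot-checked afterwards with:
	
	    kubectl auth can-i list pods --as system:kube-scheduler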
	
	
	==> kubelet <==
	Nov 23 08:21:33 addons-450053 kubelet[1287]: I1123 08:21:33.753762    1287 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-625vc" secret="" err="secret \"gcp-auth\" not found"
	Nov 23 08:21:33 addons-450053 kubelet[1287]: I1123 08:21:33.764102    1287 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/amd-gpu-device-plugin-625vc" podStartSLOduration=1.278594839 podStartE2EDuration="30.764085451s" podCreationTimestamp="2025-11-23 08:21:03 +0000 UTC" firstStartedPulling="2025-11-23 08:21:03.609563044 +0000 UTC m=+18.131224360" lastFinishedPulling="2025-11-23 08:21:33.095053671 +0000 UTC m=+47.616714972" observedRunningTime="2025-11-23 08:21:33.763417823 +0000 UTC m=+48.285079142" watchObservedRunningTime="2025-11-23 08:21:33.764085451 +0000 UTC m=+48.285746769"
	Nov 23 08:21:34 addons-450053 kubelet[1287]: I1123 08:21:34.756661    1287 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-625vc" secret="" err="secret \"gcp-auth\" not found"
	Nov 23 08:21:35 addons-450053 kubelet[1287]: E1123 08:21:35.034758    1287 secret.go:189] Couldn't get secret kube-system/registry-creds-gcr: secret "registry-creds-gcr" not found
	Nov 23 08:21:35 addons-450053 kubelet[1287]: E1123 08:21:35.034865    1287 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b765d6ad-2418-44c7-9da3-fb58dc143860-gcr-creds podName:b765d6ad-2418-44c7-9da3-fb58dc143860 nodeName:}" failed. No retries permitted until 2025-11-23 08:22:07.034841716 +0000 UTC m=+81.556503030 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "gcr-creds" (UniqueName: "kubernetes.io/secret/b765d6ad-2418-44c7-9da3-fb58dc143860-gcr-creds") pod "registry-creds-764b6fb674-gvfgs" (UID: "b765d6ad-2418-44c7-9da3-fb58dc143860") : secret "registry-creds-gcr" not found
	Nov 23 08:21:35 addons-450053 kubelet[1287]: I1123 08:21:35.760743    1287 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-l5z45" secret="" err="secret \"gcp-auth\" not found"
	Nov 23 08:21:35 addons-450053 kubelet[1287]: I1123 08:21:35.770538    1287 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/registry-proxy-l5z45" podStartSLOduration=0.876223954 podStartE2EDuration="32.770520274s" podCreationTimestamp="2025-11-23 08:21:03 +0000 UTC" firstStartedPulling="2025-11-23 08:21:03.68528929 +0000 UTC m=+18.206950599" lastFinishedPulling="2025-11-23 08:21:35.57958562 +0000 UTC m=+50.101246919" observedRunningTime="2025-11-23 08:21:35.769524517 +0000 UTC m=+50.291185836" watchObservedRunningTime="2025-11-23 08:21:35.770520274 +0000 UTC m=+50.292181592"
	Nov 23 08:21:36 addons-450053 kubelet[1287]: I1123 08:21:36.764126    1287 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-l5z45" secret="" err="secret \"gcp-auth\" not found"
	Nov 23 08:21:38 addons-450053 kubelet[1287]: I1123 08:21:38.788875    1287 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="gadget/gadget-mblm5" podStartSLOduration=17.76141912 podStartE2EDuration="46.788850223s" podCreationTimestamp="2025-11-23 08:20:52 +0000 UTC" firstStartedPulling="2025-11-23 08:21:09.633070465 +0000 UTC m=+24.154731763" lastFinishedPulling="2025-11-23 08:21:38.660501557 +0000 UTC m=+53.182162866" observedRunningTime="2025-11-23 08:21:38.788593954 +0000 UTC m=+53.310255272" watchObservedRunningTime="2025-11-23 08:21:38.788850223 +0000 UTC m=+53.310511543"
	Nov 23 08:21:42 addons-450053 kubelet[1287]: I1123 08:21:42.798785    1287 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="ingress-nginx/ingress-nginx-controller-6c8bf45fb-k5xk4" podStartSLOduration=27.822644377 podStartE2EDuration="50.798766289s" podCreationTimestamp="2025-11-23 08:20:52 +0000 UTC" firstStartedPulling="2025-11-23 08:21:19.125047612 +0000 UTC m=+33.646708915" lastFinishedPulling="2025-11-23 08:21:42.101169514 +0000 UTC m=+56.622830827" observedRunningTime="2025-11-23 08:21:42.79762373 +0000 UTC m=+57.319285049" watchObservedRunningTime="2025-11-23 08:21:42.798766289 +0000 UTC m=+57.320427610"
	Nov 23 08:21:45 addons-450053 kubelet[1287]: I1123 08:21:45.810916    1287 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="gcp-auth/gcp-auth-78565c9fb4-mxx49" podStartSLOduration=20.584931111 podStartE2EDuration="46.810894949s" podCreationTimestamp="2025-11-23 08:20:59 +0000 UTC" firstStartedPulling="2025-11-23 08:21:19.129683008 +0000 UTC m=+33.651344306" lastFinishedPulling="2025-11-23 08:21:45.355646843 +0000 UTC m=+59.877308144" observedRunningTime="2025-11-23 08:21:45.809557862 +0000 UTC m=+60.331219182" watchObservedRunningTime="2025-11-23 08:21:45.810894949 +0000 UTC m=+60.332556268"
	Nov 23 08:21:49 addons-450053 kubelet[1287]: I1123 08:21:49.600903    1287 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: hostpath.csi.k8s.io endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0
	Nov 23 08:21:49 addons-450053 kubelet[1287]: I1123 08:21:49.600954    1287 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: hostpath.csi.k8s.io at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock
	Nov 23 08:21:51 addons-450053 kubelet[1287]: I1123 08:21:51.559367    1287 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a8f20afe-9f19-49b8-b75a-869c3ad17169" path="/var/lib/kubelet/pods/a8f20afe-9f19-49b8-b75a-869c3ad17169/volumes"
	Nov 23 08:21:51 addons-450053 kubelet[1287]: I1123 08:21:51.848862    1287 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/csi-hostpathplugin-kgwc9" podStartSLOduration=1.5701524249999999 podStartE2EDuration="48.848837896s" podCreationTimestamp="2025-11-23 08:21:03 +0000 UTC" firstStartedPulling="2025-11-23 08:21:03.586153631 +0000 UTC m=+18.107814929" lastFinishedPulling="2025-11-23 08:21:50.864839101 +0000 UTC m=+65.386500400" observedRunningTime="2025-11-23 08:21:51.847792763 +0000 UTC m=+66.369454096" watchObservedRunningTime="2025-11-23 08:21:51.848837896 +0000 UTC m=+66.370499214"
	Nov 23 08:21:54 addons-450053 kubelet[1287]: I1123 08:21:54.585424    1287 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/822ae99b-9daa-4f74-b15f-1da49fbcf1fe-gcp-creds\") pod \"busybox\" (UID: \"822ae99b-9daa-4f74-b15f-1da49fbcf1fe\") " pod="default/busybox"
	Nov 23 08:21:54 addons-450053 kubelet[1287]: I1123 08:21:54.585544    1287 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bbpgt\" (UniqueName: \"kubernetes.io/projected/822ae99b-9daa-4f74-b15f-1da49fbcf1fe-kube-api-access-bbpgt\") pod \"busybox\" (UID: \"822ae99b-9daa-4f74-b15f-1da49fbcf1fe\") " pod="default/busybox"
	Nov 23 08:21:58 addons-450053 kubelet[1287]: I1123 08:21:58.017575    1287 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=1.737987032 podStartE2EDuration="4.017559538s" podCreationTimestamp="2025-11-23 08:21:54 +0000 UTC" firstStartedPulling="2025-11-23 08:21:54.788369015 +0000 UTC m=+69.310030312" lastFinishedPulling="2025-11-23 08:21:57.067941502 +0000 UTC m=+71.589602818" observedRunningTime="2025-11-23 08:21:58.017222238 +0000 UTC m=+72.538883555" watchObservedRunningTime="2025-11-23 08:21:58.017559538 +0000 UTC m=+72.539220855"
	Nov 23 08:21:59 addons-450053 kubelet[1287]: I1123 08:21:59.559959    1287 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9fbfdf9c-cf24-4bbd-beb5-59167b154363" path="/var/lib/kubelet/pods/9fbfdf9c-cf24-4bbd-beb5-59167b154363/volumes"
	Nov 23 08:22:04 addons-450053 kubelet[1287]: E1123 08:22:04.611914    1287 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:50502->127.0.0.1:34493: write tcp 127.0.0.1:50502->127.0.0.1:34493: write: broken pipe
	Nov 23 08:22:04 addons-450053 kubelet[1287]: E1123 08:22:04.758256    1287 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:50516->127.0.0.1:34493: write tcp 127.0.0.1:50516->127.0.0.1:34493: write: broken pipe
	Nov 23 08:22:05 addons-450053 kubelet[1287]: I1123 08:22:05.365663    1287 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"script\" (UniqueName: \"kubernetes.io/configmap/1630b01f-3d3f-4e8a-8a8f-33e7461656ff-script\") pod \"helper-pod-create-pvc-3787c6ea-829c-4a2e-a5fe-c888ef0ec240\" (UID: \"1630b01f-3d3f-4e8a-8a8f-33e7461656ff\") " pod="local-path-storage/helper-pod-create-pvc-3787c6ea-829c-4a2e-a5fe-c888ef0ec240"
	Nov 23 08:22:05 addons-450053 kubelet[1287]: I1123 08:22:05.365716    1287 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/1630b01f-3d3f-4e8a-8a8f-33e7461656ff-gcp-creds\") pod \"helper-pod-create-pvc-3787c6ea-829c-4a2e-a5fe-c888ef0ec240\" (UID: \"1630b01f-3d3f-4e8a-8a8f-33e7461656ff\") " pod="local-path-storage/helper-pod-create-pvc-3787c6ea-829c-4a2e-a5fe-c888ef0ec240"
	Nov 23 08:22:05 addons-450053 kubelet[1287]: I1123 08:22:05.365793    1287 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/host-path/1630b01f-3d3f-4e8a-8a8f-33e7461656ff-data\") pod \"helper-pod-create-pvc-3787c6ea-829c-4a2e-a5fe-c888ef0ec240\" (UID: \"1630b01f-3d3f-4e8a-8a8f-33e7461656ff\") " pod="local-path-storage/helper-pod-create-pvc-3787c6ea-829c-4a2e-a5fe-c888ef0ec240"
	Nov 23 08:22:05 addons-450053 kubelet[1287]: I1123 08:22:05.365822    1287 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rphts\" (UniqueName: \"kubernetes.io/projected/1630b01f-3d3f-4e8a-8a8f-33e7461656ff-kube-api-access-rphts\") pod \"helper-pod-create-pvc-3787c6ea-829c-4a2e-a5fe-c888ef0ec240\" (UID: \"1630b01f-3d3f-4e8a-8a8f-33e7461656ff\") " pod="local-path-storage/helper-pod-create-pvc-3787c6ea-829c-4a2e-a5fe-c888ef0ec240"
	
	
	==> storage-provisioner [4ea39bfdb1b8efcabd00b3a8f0c5de2e2517bbf238a941a88f165c0fd97b9473] <==
	W1123 08:21:41.813349       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:21:43.816829       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:21:43.820363       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:21:45.822858       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:21:45.826619       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:21:47.829460       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:21:47.833109       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:21:49.835706       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:21:49.840139       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:21:51.842759       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:21:51.848686       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:21:53.852124       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:21:53.856844       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:21:55.859674       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:21:55.863588       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:21:57.866281       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:21:57.886847       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:21:59.899329       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:21:59.914108       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:22:01.916728       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:22:01.921781       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:22:03.924642       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:22:03.928266       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:22:05.931797       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:22:05.936644       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
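Two observations on the dump above. First, the kubelet pod_startup_latency_tracker entries encode their own arithmetic: for default/busybox, podStartE2EDuration = observedRunningTime - podCreationTimestamp = 08:21:58.017559538 - 08:21:54 ≈ 4.018s, and podStartSLOduration excludes the image-pull window (lastFinishedPulling - firstStartedPulling = 08:21:57.067941502 - 08:21:54.788369015 ≈ 2.280s), leaving ≈ 1.738s, which matches the logged 1.737987032. Second, the storage-provisioner block repeats the same client-go warning roughly every two seconds: it still reads and updates a v1 Endpoints object, deprecated since v1.33 in favor of discovery.k8s.io/v1 EndpointSlice, and the two-second cadence is consistent with an Endpoints-based leader-election lock renewing on its retry period. A minimal sketch of the Lease-based lock that avoids the deprecated API entirely, assuming in-cluster config and illustrative names (this is not the provisioner's actual code):

// Hypothetical sketch: leader election on a coordination.k8s.io Lease
// instead of the deprecated v1 Endpoints object.
package main

import (
	"context"
	"os"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/leaderelection"
	"k8s.io/client-go/tools/leaderelection/resourcelock"
)

func main() {
	cfg, err := rest.InClusterConfig() // assumes the process runs in-cluster
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Lease objects live in coordination.k8s.io/v1, so renewals never
	// touch the deprecated v1 Endpoints API and the warnings disappear.
	lock := &resourcelock.LeaseLock{
		LeaseMeta:  metav1.ObjectMeta{Name: "storage-provisioner", Namespace: "kube-system"},
		Client:     client.CoordinationV1(),
		LockConfig: resourcelock.ResourceLockConfig{Identity: os.Getenv("POD_NAME")},
	}

	leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
		Lock:          lock,
		LeaseDuration: 15 * time.Second,
		RenewDeadline: 10 * time.Second,
		RetryPeriod:   2 * time.Second, // same cadence as the warnings above
		Callbacks: leaderelection.LeaderCallbacks{
			OnStartedLeading: func(ctx context.Context) { /* start provisioning */ },
			OnStoppedLeading: func() { os.Exit(1) },
		},
	})
}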
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-450053 -n addons-450053
helpers_test.go:269: (dbg) Run:  kubectl --context addons-450053 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: test-local-path ingress-nginx-admission-create-ll2hd ingress-nginx-admission-patch-5fm9n registry-creds-764b6fb674-gvfgs helper-pod-create-pvc-3787c6ea-829c-4a2e-a5fe-c888ef0ec240
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Headlamp]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-450053 describe pod test-local-path ingress-nginx-admission-create-ll2hd ingress-nginx-admission-patch-5fm9n registry-creds-764b6fb674-gvfgs helper-pod-create-pvc-3787c6ea-829c-4a2e-a5fe-c888ef0ec240
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-450053 describe pod test-local-path ingress-nginx-admission-create-ll2hd ingress-nginx-admission-patch-5fm9n registry-creds-764b6fb674-gvfgs helper-pod-create-pvc-3787c6ea-829c-4a2e-a5fe-c888ef0ec240: exit status 1 (68.331539ms)

-- stdout --
	Name:             test-local-path
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           run=test-local-path
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Containers:
	  busybox:
	    Image:      busybox:stable
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sh
	      -c
	      echo 'local-path-provisioner' > /test/file1
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /test from data (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-q9gjh (ro)
	Volumes:
	  data:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  test-pvc
	    ReadOnly:   false
	  kube-api-access-q9gjh:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:            <none>

-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-ll2hd" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-5fm9n" not found
	Error from server (NotFound): pods "registry-creds-764b6fb674-gvfgs" not found
	Error from server (NotFound): pods "helper-pod-create-pvc-3787c6ea-829c-4a2e-a5fe-c888ef0ec240" not found

** /stderr **
helpers_test.go:287: kubectl --context addons-450053 describe pod test-local-path ingress-nginx-admission-create-ll2hd ingress-nginx-admission-patch-5fm9n registry-creds-764b6fb674-gvfgs helper-pod-create-pvc-3787c6ea-829c-4a2e-a5fe-c888ef0ec240: exit status 1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-450053 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-450053 addons disable headlamp --alsologtostderr -v=1: exit status 11 (258.227617ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1123 08:22:07.467317  117600 out.go:360] Setting OutFile to fd 1 ...
	I1123 08:22:07.467597  117600 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:22:07.467608  117600 out.go:374] Setting ErrFile to fd 2...
	I1123 08:22:07.467612  117600 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:22:07.467823  117600 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21969-103686/.minikube/bin
	I1123 08:22:07.468111  117600 mustload.go:66] Loading cluster: addons-450053
	I1123 08:22:07.468443  117600 config.go:182] Loaded profile config "addons-450053": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 08:22:07.468459  117600 addons.go:622] checking whether the cluster is paused
	I1123 08:22:07.468537  117600 config.go:182] Loaded profile config "addons-450053": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 08:22:07.468552  117600 host.go:66] Checking if "addons-450053" exists ...
	I1123 08:22:07.468914  117600 cli_runner.go:164] Run: docker container inspect addons-450053 --format={{.State.Status}}
	I1123 08:22:07.488719  117600 ssh_runner.go:195] Run: systemctl --version
	I1123 08:22:07.488780  117600 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-450053
	I1123 08:22:07.508745  117600 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21969-103686/.minikube/machines/addons-450053/id_rsa Username:docker}
	I1123 08:22:07.610611  117600 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1123 08:22:07.610726  117600 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1123 08:22:07.641490  117600 cri.go:89] found id: "738d8d379f2513ebbed6c9882209756963a949bde3ed19ade5de8580001c43b6"
	I1123 08:22:07.641513  117600 cri.go:89] found id: "d984a1356e5ecf35be65e8fc6e7992bb042d8927a704c9b1e8331c05254332d5"
	I1123 08:22:07.641519  117600 cri.go:89] found id: "f1bd36bf8d3aa419e06a2d8728e06eef3a4eb3bac9a5f4c3b24fff0f491bdd61"
	I1123 08:22:07.641524  117600 cri.go:89] found id: "e39671b6291757e254f89dc6033c7d24376b7c7120673820ff9f2cd071649ede"
	I1123 08:22:07.641528  117600 cri.go:89] found id: "524005afa9256011512767926b02159bfbb545a2d097df64aeda6918b32cfbaa"
	I1123 08:22:07.641533  117600 cri.go:89] found id: "9989944eaa26fdbd8c011baeec7cf3efbfbbe246f5276b6ceecbd64d61294399"
	I1123 08:22:07.641537  117600 cri.go:89] found id: "e3688d5b85c227523b5a3ce94991d4ee820fdc1ae296225f370587505ff591b6"
	I1123 08:22:07.641545  117600 cri.go:89] found id: "8ecc013e239af1858173ffe38500069f30090d7c4a8d2e55e0cf7931a593fbbe"
	I1123 08:22:07.641549  117600 cri.go:89] found id: "1dfc56fc8d94b1225a098a523c9650f6663217b21237541dc906578e3effc03d"
	I1123 08:22:07.641557  117600 cri.go:89] found id: "227f1cba9bc38078f86a2ee004edc57f34ac09f7aae18e70a35257d97524a389"
	I1123 08:22:07.641562  117600 cri.go:89] found id: "878966c2c1dd7601f149f13eb451daa7034eebd08cef35eebb83a577b882ce48"
	I1123 08:22:07.641566  117600 cri.go:89] found id: "f9cd2adc0709d244a2c7bc3357291110cd3b690d9689c58d1d015c5371f7f2ca"
	I1123 08:22:07.641594  117600 cri.go:89] found id: "a6ff371d12340c0a9617d886be8620819d349d024e915a5c18777920e9522800"
	I1123 08:22:07.641602  117600 cri.go:89] found id: "8364e195c165b56eaa9cee7e25199a566d7f232fea45a9c0da829ce74e7a169e"
	I1123 08:22:07.641645  117600 cri.go:89] found id: "bca140d99c87f34e3a5c81b3e3f53364fd36a08c860a55709db43ad1f00c7bd8"
	I1123 08:22:07.641664  117600 cri.go:89] found id: "0e62c249e71fecd3ff09a415c2a850ba5eb56735172347f36a18693f8631498e"
	I1123 08:22:07.641669  117600 cri.go:89] found id: "4ea39bfdb1b8efcabd00b3a8f0c5de2e2517bbf238a941a88f165c0fd97b9473"
	I1123 08:22:07.641675  117600 cri.go:89] found id: "fc8e0ddc56a4bb558f8c5f776af3a634abca3c1b4e966b269f9b6b6d44daab9e"
	I1123 08:22:07.641678  117600 cri.go:89] found id: "f5b7d2b9fc7fd00d96afc93889d6409f70924a5f3c846c8b45a49812283c03d7"
	I1123 08:22:07.641682  117600 cri.go:89] found id: "204df826a5f7fe5bc267f72965e69d0d4468a7d03c4f27b3c4dbc85558bcb790"
	I1123 08:22:07.641686  117600 cri.go:89] found id: "2b03a5a989737025a7389ea9ffc8f4440c697509dd7c3423436b93deabfc0635"
	I1123 08:22:07.641691  117600 cri.go:89] found id: "5ce09f86a113c208d0fb8a89714d1855c858b529d081cf18c7e93d6084542e29"
	I1123 08:22:07.641695  117600 cri.go:89] found id: "3d0e901e59417cd60672ef724e6e4202b2c13b0499e5b198455979ade2929d18"
	I1123 08:22:07.641699  117600 cri.go:89] found id: "58c0dd74075cff851380d2c9c1fbd280646b92ecf56beb6cd7c81444769b9354"
	I1123 08:22:07.641703  117600 cri.go:89] found id: ""
	I1123 08:22:07.641758  117600 ssh_runner.go:195] Run: sudo runc list -f json
	I1123 08:22:07.655936  117600 out.go:203] 
	W1123 08:22:07.657204  117600 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T08:22:07Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T08:22:07Z" level=error msg="open /run/runc: no such file or directory"
	
	W1123 08:22:07.657226  117600 out.go:285] * 
	* 
	W1123 08:22:07.660260  117600 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_efe3f0a65eabdab15324ffdebd5a66da17706a9c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_efe3f0a65eabdab15324ffdebd5a66da17706a9c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1123 08:22:07.661505  117600 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable headlamp addon: args "out/minikube-linux-amd64 -p addons-450053 addons disable headlamp --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Headlamp (2.66s)
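Every MK_ADDON_DISABLE_PAUSED failure in this report has the same shape: the addon-disable path first checks whether the cluster is paused, successfully lists the kube-system containers through crictl, then shells out to `sudo runc list -f json`, which exits 1 with "open /run/runc: no such file or directory" because this crio image does not run its containers under runc, so runc's state directory never exists. A minimal sketch of a state-directory probe that would tolerate a crun-backed crio, assuming crun's default /run/crun root and a runc-compatible `list` subcommand (both assumptions; this is not minikube's code):

// Hypothetical sketch: pick the OCI runtime binary by probing its state
// directory instead of hard-coding runc.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

// listContainersJSON asks whichever OCI runtime appears to be in use for
// its container list as JSON.
func listContainersJSON() ([]byte, error) {
	bin := "runc"
	if _, err := os.Stat("/run/runc"); os.IsNotExist(err) {
		// Assumed fallback: crio here ships crun, whose default state
		// root is /run/crun and whose CLI mirrors runc's.
		if _, err := os.Stat("/run/crun"); err == nil {
			bin = "crun"
		}
	}
	out, err := exec.Command("sudo", bin, "list", "-f", "json").Output()
	if err != nil {
		return nil, fmt.Errorf("%s list: %w", bin, err)
	}
	return out, nil
}

func main() {
	out, err := listContainersJSON()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Printf("%s\n", out)
}

Probing the state directory rather than hard-coding the binary keeps the paused check meaningful on both runc- and crun-based nodes; on this image the fallback branch would be the one taken, which is why the identical stderr recurs in every disable attempt below.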

TestAddons/parallel/CloudSpanner (5.25s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:352: "cloud-spanner-emulator-5bdddb765-4vxx9" [d30c8f34-a76d-41a3-9857-def75c374945] Running
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.003818567s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-450053 addons disable cloud-spanner --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-450053 addons disable cloud-spanner --alsologtostderr -v=1: exit status 11 (246.163624ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1123 08:22:12.730996  117863 out.go:360] Setting OutFile to fd 1 ...
	I1123 08:22:12.731252  117863 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:22:12.731263  117863 out.go:374] Setting ErrFile to fd 2...
	I1123 08:22:12.731268  117863 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:22:12.731524  117863 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21969-103686/.minikube/bin
	I1123 08:22:12.731941  117863 mustload.go:66] Loading cluster: addons-450053
	I1123 08:22:12.732402  117863 config.go:182] Loaded profile config "addons-450053": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 08:22:12.732426  117863 addons.go:622] checking whether the cluster is paused
	I1123 08:22:12.732535  117863 config.go:182] Loaded profile config "addons-450053": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 08:22:12.732563  117863 host.go:66] Checking if "addons-450053" exists ...
	I1123 08:22:12.733089  117863 cli_runner.go:164] Run: docker container inspect addons-450053 --format={{.State.Status}}
	I1123 08:22:12.751238  117863 ssh_runner.go:195] Run: systemctl --version
	I1123 08:22:12.751306  117863 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-450053
	I1123 08:22:12.768712  117863 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21969-103686/.minikube/machines/addons-450053/id_rsa Username:docker}
	I1123 08:22:12.868523  117863 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1123 08:22:12.868605  117863 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1123 08:22:12.897120  117863 cri.go:89] found id: "738d8d379f2513ebbed6c9882209756963a949bde3ed19ade5de8580001c43b6"
	I1123 08:22:12.897142  117863 cri.go:89] found id: "d984a1356e5ecf35be65e8fc6e7992bb042d8927a704c9b1e8331c05254332d5"
	I1123 08:22:12.897148  117863 cri.go:89] found id: "f1bd36bf8d3aa419e06a2d8728e06eef3a4eb3bac9a5f4c3b24fff0f491bdd61"
	I1123 08:22:12.897153  117863 cri.go:89] found id: "e39671b6291757e254f89dc6033c7d24376b7c7120673820ff9f2cd071649ede"
	I1123 08:22:12.897157  117863 cri.go:89] found id: "524005afa9256011512767926b02159bfbb545a2d097df64aeda6918b32cfbaa"
	I1123 08:22:12.897163  117863 cri.go:89] found id: "9989944eaa26fdbd8c011baeec7cf3efbfbbe246f5276b6ceecbd64d61294399"
	I1123 08:22:12.897167  117863 cri.go:89] found id: "e3688d5b85c227523b5a3ce94991d4ee820fdc1ae296225f370587505ff591b6"
	I1123 08:22:12.897171  117863 cri.go:89] found id: "8ecc013e239af1858173ffe38500069f30090d7c4a8d2e55e0cf7931a593fbbe"
	I1123 08:22:12.897176  117863 cri.go:89] found id: "1dfc56fc8d94b1225a098a523c9650f6663217b21237541dc906578e3effc03d"
	I1123 08:22:12.897191  117863 cri.go:89] found id: "227f1cba9bc38078f86a2ee004edc57f34ac09f7aae18e70a35257d97524a389"
	I1123 08:22:12.897199  117863 cri.go:89] found id: "878966c2c1dd7601f149f13eb451daa7034eebd08cef35eebb83a577b882ce48"
	I1123 08:22:12.897204  117863 cri.go:89] found id: "f9cd2adc0709d244a2c7bc3357291110cd3b690d9689c58d1d015c5371f7f2ca"
	I1123 08:22:12.897209  117863 cri.go:89] found id: "a6ff371d12340c0a9617d886be8620819d349d024e915a5c18777920e9522800"
	I1123 08:22:12.897214  117863 cri.go:89] found id: "8364e195c165b56eaa9cee7e25199a566d7f232fea45a9c0da829ce74e7a169e"
	I1123 08:22:12.897221  117863 cri.go:89] found id: "bca140d99c87f34e3a5c81b3e3f53364fd36a08c860a55709db43ad1f00c7bd8"
	I1123 08:22:12.897229  117863 cri.go:89] found id: "0e62c249e71fecd3ff09a415c2a850ba5eb56735172347f36a18693f8631498e"
	I1123 08:22:12.897236  117863 cri.go:89] found id: "4ea39bfdb1b8efcabd00b3a8f0c5de2e2517bbf238a941a88f165c0fd97b9473"
	I1123 08:22:12.897242  117863 cri.go:89] found id: "fc8e0ddc56a4bb558f8c5f776af3a634abca3c1b4e966b269f9b6b6d44daab9e"
	I1123 08:22:12.897246  117863 cri.go:89] found id: "f5b7d2b9fc7fd00d96afc93889d6409f70924a5f3c846c8b45a49812283c03d7"
	I1123 08:22:12.897253  117863 cri.go:89] found id: "204df826a5f7fe5bc267f72965e69d0d4468a7d03c4f27b3c4dbc85558bcb790"
	I1123 08:22:12.897264  117863 cri.go:89] found id: "2b03a5a989737025a7389ea9ffc8f4440c697509dd7c3423436b93deabfc0635"
	I1123 08:22:12.897273  117863 cri.go:89] found id: "5ce09f86a113c208d0fb8a89714d1855c858b529d081cf18c7e93d6084542e29"
	I1123 08:22:12.897278  117863 cri.go:89] found id: "3d0e901e59417cd60672ef724e6e4202b2c13b0499e5b198455979ade2929d18"
	I1123 08:22:12.897286  117863 cri.go:89] found id: "58c0dd74075cff851380d2c9c1fbd280646b92ecf56beb6cd7c81444769b9354"
	I1123 08:22:12.897291  117863 cri.go:89] found id: ""
	I1123 08:22:12.897338  117863 ssh_runner.go:195] Run: sudo runc list -f json
	I1123 08:22:12.911025  117863 out.go:203] 
	W1123 08:22:12.912271  117863 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T08:22:12Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T08:22:12Z" level=error msg="open /run/runc: no such file or directory"
	
	W1123 08:22:12.912294  117863 out.go:285] * 
	* 
	W1123 08:22:12.915344  117863 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e93ff976b7e98e1dc466aded9385c0856b6d1b41_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e93ff976b7e98e1dc466aded9385c0856b6d1b41_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1123 08:22:12.916490  117863 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable cloud-spanner addon: args "out/minikube-linux-amd64 -p addons-450053 addons disable cloud-spanner --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/CloudSpanner (5.25s)

TestAddons/parallel/LocalPath (10.16s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:949: (dbg) Run:  kubectl --context addons-450053 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:955: (dbg) Run:  kubectl --context addons-450053 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:959: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-450053 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-450053 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-450053 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-450053 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-450053 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-450053 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:352: "test-local-path" [5cdc1978-7eff-4e24-8c1e-b62999a458c5] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "test-local-path" [5cdc1978-7eff-4e24-8c1e-b62999a458c5] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "test-local-path" [5cdc1978-7eff-4e24-8c1e-b62999a458c5] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 4.002940012s
addons_test.go:967: (dbg) Run:  kubectl --context addons-450053 get pvc test-pvc -o=json
addons_test.go:976: (dbg) Run:  out/minikube-linux-amd64 -p addons-450053 ssh "cat /opt/local-path-provisioner/pvc-3787c6ea-829c-4a2e-a5fe-c888ef0ec240_default_test-pvc/file1"
addons_test.go:988: (dbg) Run:  kubectl --context addons-450053 delete pod test-local-path
addons_test.go:992: (dbg) Run:  kubectl --context addons-450053 delete pvc test-pvc
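The repeated helpers_test.go:402 invocations above are one poll loop: re-read test-pvc's .status.phase every couple of seconds until it reports Bound or the 5m0s budget runs out. The same wait, sketched with client-go instead of kubectl jsonpath (client bootstrapping is assumed, not shown in the log; the claim name and timeout mirror the test):

// Hypothetical client-go equivalent of the jsonpath polling above.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForPVCBound re-checks .status.phase until the claim is Bound,
// returning an error if the timeout elapses first.
func waitForPVCBound(ctx context.Context, client kubernetes.Interface, ns, name string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(ctx, 2*time.Second, timeout, true,
		func(ctx context.Context) (bool, error) {
			pvc, err := client.CoreV1().PersistentVolumeClaims(ns).Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, err
			}
			return pvc.Status.Phase == corev1.ClaimBound, nil
		})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	if err := waitForPVCBound(context.Background(), client, "default", "test-pvc", 5*time.Minute); err != nil {
		panic(err)
	}
	fmt.Println("test-pvc is Bound")
}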
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-450053 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-450053 addons disable storage-provisioner-rancher --alsologtostderr -v=1: exit status 11 (262.011326ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1123 08:22:14.960459  118139 out.go:360] Setting OutFile to fd 1 ...
	I1123 08:22:14.960835  118139 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:22:14.960846  118139 out.go:374] Setting ErrFile to fd 2...
	I1123 08:22:14.960851  118139 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:22:14.961062  118139 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21969-103686/.minikube/bin
	I1123 08:22:14.961353  118139 mustload.go:66] Loading cluster: addons-450053
	I1123 08:22:14.961687  118139 config.go:182] Loaded profile config "addons-450053": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 08:22:14.961703  118139 addons.go:622] checking whether the cluster is paused
	I1123 08:22:14.961790  118139 config.go:182] Loaded profile config "addons-450053": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 08:22:14.961809  118139 host.go:66] Checking if "addons-450053" exists ...
	I1123 08:22:14.962285  118139 cli_runner.go:164] Run: docker container inspect addons-450053 --format={{.State.Status}}
	I1123 08:22:14.980329  118139 ssh_runner.go:195] Run: systemctl --version
	I1123 08:22:14.980388  118139 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-450053
	I1123 08:22:14.997803  118139 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21969-103686/.minikube/machines/addons-450053/id_rsa Username:docker}
	I1123 08:22:15.099892  118139 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1123 08:22:15.100012  118139 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1123 08:22:15.132344  118139 cri.go:89] found id: "738d8d379f2513ebbed6c9882209756963a949bde3ed19ade5de8580001c43b6"
	I1123 08:22:15.132375  118139 cri.go:89] found id: "d984a1356e5ecf35be65e8fc6e7992bb042d8927a704c9b1e8331c05254332d5"
	I1123 08:22:15.132379  118139 cri.go:89] found id: "f1bd36bf8d3aa419e06a2d8728e06eef3a4eb3bac9a5f4c3b24fff0f491bdd61"
	I1123 08:22:15.132382  118139 cri.go:89] found id: "e39671b6291757e254f89dc6033c7d24376b7c7120673820ff9f2cd071649ede"
	I1123 08:22:15.132385  118139 cri.go:89] found id: "524005afa9256011512767926b02159bfbb545a2d097df64aeda6918b32cfbaa"
	I1123 08:22:15.132389  118139 cri.go:89] found id: "9989944eaa26fdbd8c011baeec7cf3efbfbbe246f5276b6ceecbd64d61294399"
	I1123 08:22:15.132392  118139 cri.go:89] found id: "e3688d5b85c227523b5a3ce94991d4ee820fdc1ae296225f370587505ff591b6"
	I1123 08:22:15.132395  118139 cri.go:89] found id: "8ecc013e239af1858173ffe38500069f30090d7c4a8d2e55e0cf7931a593fbbe"
	I1123 08:22:15.132398  118139 cri.go:89] found id: "1dfc56fc8d94b1225a098a523c9650f6663217b21237541dc906578e3effc03d"
	I1123 08:22:15.132409  118139 cri.go:89] found id: "227f1cba9bc38078f86a2ee004edc57f34ac09f7aae18e70a35257d97524a389"
	I1123 08:22:15.132414  118139 cri.go:89] found id: "878966c2c1dd7601f149f13eb451daa7034eebd08cef35eebb83a577b882ce48"
	I1123 08:22:15.132418  118139 cri.go:89] found id: "f9cd2adc0709d244a2c7bc3357291110cd3b690d9689c58d1d015c5371f7f2ca"
	I1123 08:22:15.132422  118139 cri.go:89] found id: "a6ff371d12340c0a9617d886be8620819d349d024e915a5c18777920e9522800"
	I1123 08:22:15.132427  118139 cri.go:89] found id: "8364e195c165b56eaa9cee7e25199a566d7f232fea45a9c0da829ce74e7a169e"
	I1123 08:22:15.132432  118139 cri.go:89] found id: "bca140d99c87f34e3a5c81b3e3f53364fd36a08c860a55709db43ad1f00c7bd8"
	I1123 08:22:15.132442  118139 cri.go:89] found id: "0e62c249e71fecd3ff09a415c2a850ba5eb56735172347f36a18693f8631498e"
	I1123 08:22:15.132451  118139 cri.go:89] found id: "4ea39bfdb1b8efcabd00b3a8f0c5de2e2517bbf238a941a88f165c0fd97b9473"
	I1123 08:22:15.132457  118139 cri.go:89] found id: "fc8e0ddc56a4bb558f8c5f776af3a634abca3c1b4e966b269f9b6b6d44daab9e"
	I1123 08:22:15.132462  118139 cri.go:89] found id: "f5b7d2b9fc7fd00d96afc93889d6409f70924a5f3c846c8b45a49812283c03d7"
	I1123 08:22:15.132466  118139 cri.go:89] found id: "204df826a5f7fe5bc267f72965e69d0d4468a7d03c4f27b3c4dbc85558bcb790"
	I1123 08:22:15.132471  118139 cri.go:89] found id: "2b03a5a989737025a7389ea9ffc8f4440c697509dd7c3423436b93deabfc0635"
	I1123 08:22:15.132480  118139 cri.go:89] found id: "5ce09f86a113c208d0fb8a89714d1855c858b529d081cf18c7e93d6084542e29"
	I1123 08:22:15.132483  118139 cri.go:89] found id: "3d0e901e59417cd60672ef724e6e4202b2c13b0499e5b198455979ade2929d18"
	I1123 08:22:15.132486  118139 cri.go:89] found id: "58c0dd74075cff851380d2c9c1fbd280646b92ecf56beb6cd7c81444769b9354"
	I1123 08:22:15.132489  118139 cri.go:89] found id: ""
	I1123 08:22:15.132555  118139 ssh_runner.go:195] Run: sudo runc list -f json
	I1123 08:22:15.149958  118139 out.go:203] 
	W1123 08:22:15.151523  118139 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T08:22:15Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T08:22:15Z" level=error msg="open /run/runc: no such file or directory"
	
	W1123 08:22:15.151548  118139 out.go:285] * 
	* 
	W1123 08:22:15.156243  118139 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e8b2053d4ef30ba659303f708d034237180eb1ed_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e8b2053d4ef30ba659303f708d034237180eb1ed_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1123 08:22:15.157741  118139 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable storage-provisioner-rancher addon: args "out/minikube-linux-amd64 -p addons-450053 addons disable storage-provisioner-rancher --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/LocalPath (10.16s)

TestAddons/parallel/NvidiaDevicePlugin (5.27s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:352: "nvidia-device-plugin-daemonset-hpnrm" [f84547e5-5d46-4cfc-874a-413b67ecdb49] Running
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.003570995s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-450053 addons disable nvidia-device-plugin --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-450053 addons disable nvidia-device-plugin --alsologtostderr -v=1: exit status 11 (259.78694ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1123 08:22:10.073909  117728 out.go:360] Setting OutFile to fd 1 ...
	I1123 08:22:10.074193  117728 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:22:10.074204  117728 out.go:374] Setting ErrFile to fd 2...
	I1123 08:22:10.074208  117728 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:22:10.074387  117728 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21969-103686/.minikube/bin
	I1123 08:22:10.074679  117728 mustload.go:66] Loading cluster: addons-450053
	I1123 08:22:10.075025  117728 config.go:182] Loaded profile config "addons-450053": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 08:22:10.075040  117728 addons.go:622] checking whether the cluster is paused
	I1123 08:22:10.075125  117728 config.go:182] Loaded profile config "addons-450053": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 08:22:10.075142  117728 host.go:66] Checking if "addons-450053" exists ...
	I1123 08:22:10.075520  117728 cli_runner.go:164] Run: docker container inspect addons-450053 --format={{.State.Status}}
	I1123 08:22:10.094563  117728 ssh_runner.go:195] Run: systemctl --version
	I1123 08:22:10.094629  117728 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-450053
	I1123 08:22:10.112442  117728 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21969-103686/.minikube/machines/addons-450053/id_rsa Username:docker}
	I1123 08:22:10.211576  117728 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1123 08:22:10.211659  117728 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1123 08:22:10.242600  117728 cri.go:89] found id: "738d8d379f2513ebbed6c9882209756963a949bde3ed19ade5de8580001c43b6"
	I1123 08:22:10.242625  117728 cri.go:89] found id: "d984a1356e5ecf35be65e8fc6e7992bb042d8927a704c9b1e8331c05254332d5"
	I1123 08:22:10.242630  117728 cri.go:89] found id: "f1bd36bf8d3aa419e06a2d8728e06eef3a4eb3bac9a5f4c3b24fff0f491bdd61"
	I1123 08:22:10.242634  117728 cri.go:89] found id: "e39671b6291757e254f89dc6033c7d24376b7c7120673820ff9f2cd071649ede"
	I1123 08:22:10.242637  117728 cri.go:89] found id: "524005afa9256011512767926b02159bfbb545a2d097df64aeda6918b32cfbaa"
	I1123 08:22:10.242641  117728 cri.go:89] found id: "9989944eaa26fdbd8c011baeec7cf3efbfbbe246f5276b6ceecbd64d61294399"
	I1123 08:22:10.242646  117728 cri.go:89] found id: "e3688d5b85c227523b5a3ce94991d4ee820fdc1ae296225f370587505ff591b6"
	I1123 08:22:10.242650  117728 cri.go:89] found id: "8ecc013e239af1858173ffe38500069f30090d7c4a8d2e55e0cf7931a593fbbe"
	I1123 08:22:10.242656  117728 cri.go:89] found id: "1dfc56fc8d94b1225a098a523c9650f6663217b21237541dc906578e3effc03d"
	I1123 08:22:10.242679  117728 cri.go:89] found id: "227f1cba9bc38078f86a2ee004edc57f34ac09f7aae18e70a35257d97524a389"
	I1123 08:22:10.242690  117728 cri.go:89] found id: "878966c2c1dd7601f149f13eb451daa7034eebd08cef35eebb83a577b882ce48"
	I1123 08:22:10.242695  117728 cri.go:89] found id: "f9cd2adc0709d244a2c7bc3357291110cd3b690d9689c58d1d015c5371f7f2ca"
	I1123 08:22:10.242700  117728 cri.go:89] found id: "a6ff371d12340c0a9617d886be8620819d349d024e915a5c18777920e9522800"
	I1123 08:22:10.242704  117728 cri.go:89] found id: "8364e195c165b56eaa9cee7e25199a566d7f232fea45a9c0da829ce74e7a169e"
	I1123 08:22:10.242709  117728 cri.go:89] found id: "bca140d99c87f34e3a5c81b3e3f53364fd36a08c860a55709db43ad1f00c7bd8"
	I1123 08:22:10.242719  117728 cri.go:89] found id: "0e62c249e71fecd3ff09a415c2a850ba5eb56735172347f36a18693f8631498e"
	I1123 08:22:10.242727  117728 cri.go:89] found id: "4ea39bfdb1b8efcabd00b3a8f0c5de2e2517bbf238a941a88f165c0fd97b9473"
	I1123 08:22:10.242733  117728 cri.go:89] found id: "fc8e0ddc56a4bb558f8c5f776af3a634abca3c1b4e966b269f9b6b6d44daab9e"
	I1123 08:22:10.242737  117728 cri.go:89] found id: "f5b7d2b9fc7fd00d96afc93889d6409f70924a5f3c846c8b45a49812283c03d7"
	I1123 08:22:10.242741  117728 cri.go:89] found id: "204df826a5f7fe5bc267f72965e69d0d4468a7d03c4f27b3c4dbc85558bcb790"
	I1123 08:22:10.242744  117728 cri.go:89] found id: "2b03a5a989737025a7389ea9ffc8f4440c697509dd7c3423436b93deabfc0635"
	I1123 08:22:10.242748  117728 cri.go:89] found id: "5ce09f86a113c208d0fb8a89714d1855c858b529d081cf18c7e93d6084542e29"
	I1123 08:22:10.242753  117728 cri.go:89] found id: "3d0e901e59417cd60672ef724e6e4202b2c13b0499e5b198455979ade2929d18"
	I1123 08:22:10.242757  117728 cri.go:89] found id: "58c0dd74075cff851380d2c9c1fbd280646b92ecf56beb6cd7c81444769b9354"
	I1123 08:22:10.242761  117728 cri.go:89] found id: ""
	I1123 08:22:10.242811  117728 ssh_runner.go:195] Run: sudo runc list -f json
	I1123 08:22:10.259575  117728 out.go:203] 
	W1123 08:22:10.260829  117728 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T08:22:10Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T08:22:10Z" level=error msg="open /run/runc: no such file or directory"
	
	W1123 08:22:10.260864  117728 out.go:285] * 
	* 
	W1123 08:22:10.265704  117728 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_47e1a72799625313bd916979b0f8aa84efd54736_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_47e1a72799625313bd916979b0f8aa84efd54736_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1123 08:22:10.267834  117728 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable nvidia-device-plugin addon: args "out/minikube-linux-amd64 -p addons-450053 addons disable nvidia-device-plugin --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/NvidiaDevicePlugin (5.27s)

TestAddons/parallel/Yakd (6.31s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:352: "yakd-dashboard-5ff678cb9-289rv" [92f3be7b-be06-4d21-8fa7-864587fad62f] Running
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.003434807s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-450053 addons disable yakd --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-450053 addons disable yakd --alsologtostderr -v=1: exit status 11 (303.494617ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1123 08:22:29.340823  119958 out.go:360] Setting OutFile to fd 1 ...
	I1123 08:22:29.341127  119958 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:22:29.341140  119958 out.go:374] Setting ErrFile to fd 2...
	I1123 08:22:29.341147  119958 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:22:29.341486  119958 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21969-103686/.minikube/bin
	I1123 08:22:29.341792  119958 mustload.go:66] Loading cluster: addons-450053
	I1123 08:22:29.342709  119958 config.go:182] Loaded profile config "addons-450053": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 08:22:29.342730  119958 addons.go:622] checking whether the cluster is paused
	I1123 08:22:29.342815  119958 config.go:182] Loaded profile config "addons-450053": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 08:22:29.342832  119958 host.go:66] Checking if "addons-450053" exists ...
	I1123 08:22:29.343338  119958 cli_runner.go:164] Run: docker container inspect addons-450053 --format={{.State.Status}}
	I1123 08:22:29.368999  119958 ssh_runner.go:195] Run: systemctl --version
	I1123 08:22:29.369085  119958 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-450053
	I1123 08:22:29.393158  119958 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21969-103686/.minikube/machines/addons-450053/id_rsa Username:docker}
	I1123 08:22:29.505134  119958 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1123 08:22:29.505214  119958 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1123 08:22:29.535295  119958 cri.go:89] found id: "738d8d379f2513ebbed6c9882209756963a949bde3ed19ade5de8580001c43b6"
	I1123 08:22:29.535323  119958 cri.go:89] found id: "d984a1356e5ecf35be65e8fc6e7992bb042d8927a704c9b1e8331c05254332d5"
	I1123 08:22:29.535330  119958 cri.go:89] found id: "f1bd36bf8d3aa419e06a2d8728e06eef3a4eb3bac9a5f4c3b24fff0f491bdd61"
	I1123 08:22:29.535334  119958 cri.go:89] found id: "e39671b6291757e254f89dc6033c7d24376b7c7120673820ff9f2cd071649ede"
	I1123 08:22:29.535339  119958 cri.go:89] found id: "524005afa9256011512767926b02159bfbb545a2d097df64aeda6918b32cfbaa"
	I1123 08:22:29.535343  119958 cri.go:89] found id: "9989944eaa26fdbd8c011baeec7cf3efbfbbe246f5276b6ceecbd64d61294399"
	I1123 08:22:29.535347  119958 cri.go:89] found id: "e3688d5b85c227523b5a3ce94991d4ee820fdc1ae296225f370587505ff591b6"
	I1123 08:22:29.535352  119958 cri.go:89] found id: "8ecc013e239af1858173ffe38500069f30090d7c4a8d2e55e0cf7931a593fbbe"
	I1123 08:22:29.535356  119958 cri.go:89] found id: "1dfc56fc8d94b1225a098a523c9650f6663217b21237541dc906578e3effc03d"
	I1123 08:22:29.535364  119958 cri.go:89] found id: "227f1cba9bc38078f86a2ee004edc57f34ac09f7aae18e70a35257d97524a389"
	I1123 08:22:29.535369  119958 cri.go:89] found id: "878966c2c1dd7601f149f13eb451daa7034eebd08cef35eebb83a577b882ce48"
	I1123 08:22:29.535374  119958 cri.go:89] found id: "f9cd2adc0709d244a2c7bc3357291110cd3b690d9689c58d1d015c5371f7f2ca"
	I1123 08:22:29.535380  119958 cri.go:89] found id: "a6ff371d12340c0a9617d886be8620819d349d024e915a5c18777920e9522800"
	I1123 08:22:29.535386  119958 cri.go:89] found id: "8364e195c165b56eaa9cee7e25199a566d7f232fea45a9c0da829ce74e7a169e"
	I1123 08:22:29.535392  119958 cri.go:89] found id: "bca140d99c87f34e3a5c81b3e3f53364fd36a08c860a55709db43ad1f00c7bd8"
	I1123 08:22:29.535409  119958 cri.go:89] found id: "0e62c249e71fecd3ff09a415c2a850ba5eb56735172347f36a18693f8631498e"
	I1123 08:22:29.535417  119958 cri.go:89] found id: "4ea39bfdb1b8efcabd00b3a8f0c5de2e2517bbf238a941a88f165c0fd97b9473"
	I1123 08:22:29.535424  119958 cri.go:89] found id: "fc8e0ddc56a4bb558f8c5f776af3a634abca3c1b4e966b269f9b6b6d44daab9e"
	I1123 08:22:29.535428  119958 cri.go:89] found id: "f5b7d2b9fc7fd00d96afc93889d6409f70924a5f3c846c8b45a49812283c03d7"
	I1123 08:22:29.535432  119958 cri.go:89] found id: "204df826a5f7fe5bc267f72965e69d0d4468a7d03c4f27b3c4dbc85558bcb790"
	I1123 08:22:29.535437  119958 cri.go:89] found id: "2b03a5a989737025a7389ea9ffc8f4440c697509dd7c3423436b93deabfc0635"
	I1123 08:22:29.535445  119958 cri.go:89] found id: "5ce09f86a113c208d0fb8a89714d1855c858b529d081cf18c7e93d6084542e29"
	I1123 08:22:29.535451  119958 cri.go:89] found id: "3d0e901e59417cd60672ef724e6e4202b2c13b0499e5b198455979ade2929d18"
	I1123 08:22:29.535458  119958 cri.go:89] found id: "58c0dd74075cff851380d2c9c1fbd280646b92ecf56beb6cd7c81444769b9354"
	I1123 08:22:29.535464  119958 cri.go:89] found id: ""
	I1123 08:22:29.535509  119958 ssh_runner.go:195] Run: sudo runc list -f json
	I1123 08:22:29.552851  119958 out.go:203] 
	W1123 08:22:29.554379  119958 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T08:22:29Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T08:22:29Z" level=error msg="open /run/runc: no such file or directory"
	
	W1123 08:22:29.554405  119958 out.go:285] * 
	* 
	W1123 08:22:29.559266  119958 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_82e5d844def28f20a5cac88dc27578ab5d1e7e1a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_82e5d844def28f20a5cac88dc27578ab5d1e7e1a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1123 08:22:29.560736  119958 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable yakd addon: args "out/minikube-linux-amd64 -p addons-450053 addons disable yakd --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Yakd (6.31s)
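Root-cause note: the addons-disable failures in this run all fail the same way. Before disabling an addon, minikube checks whether the cluster is paused by running "sudo runc list -f json" on the node; on this crio node that command exits 1 because the /run/runc state directory does not exist, so the disable aborts with MK_ADDON_DISABLE_PAUSED even though every container is running. A minimal triage sketch, assuming the profile name above and a still-running cluster (these commands are suggestions for reproduction, not steps the test ran):

	# Reproduce the exact check minikube performs; expected to fail identically:
	minikube ssh -p addons-450053 -- sudo runc list -f json
	# Confirm the runc state directory is absent on the node:
	minikube ssh -p addons-450053 -- sudo ls /run/runc

If crio on this image drives containers through a different OCI runtime or a non-default runc root, /run/runc will never be populated and this check can never pass; the node's /etc/crio configuration is the next place to look.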

TestAddons/parallel/AmdGpuDevicePlugin (5.26s)

=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1038: (dbg) TestAddons/parallel/AmdGpuDevicePlugin: waiting 6m0s for pods matching "name=amd-gpu-device-plugin" in namespace "kube-system" ...
helpers_test.go:352: "amd-gpu-device-plugin-625vc" [c5f91220-0c10-421a-80d5-efb93906fabe] Running
addons_test.go:1038: (dbg) TestAddons/parallel/AmdGpuDevicePlugin: name=amd-gpu-device-plugin healthy within 5.003558444s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-450053 addons disable amd-gpu-device-plugin --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-450053 addons disable amd-gpu-device-plugin --alsologtostderr -v=1: exit status 11 (254.726193ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1123 08:22:25.535778  119666 out.go:360] Setting OutFile to fd 1 ...
	I1123 08:22:25.535932  119666 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:22:25.535939  119666 out.go:374] Setting ErrFile to fd 2...
	I1123 08:22:25.535945  119666 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:22:25.536261  119666 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21969-103686/.minikube/bin
	I1123 08:22:25.536578  119666 mustload.go:66] Loading cluster: addons-450053
	I1123 08:22:25.537142  119666 config.go:182] Loaded profile config "addons-450053": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 08:22:25.537174  119666 addons.go:622] checking whether the cluster is paused
	I1123 08:22:25.537318  119666 config.go:182] Loaded profile config "addons-450053": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 08:22:25.537354  119666 host.go:66] Checking if "addons-450053" exists ...
	I1123 08:22:25.537878  119666 cli_runner.go:164] Run: docker container inspect addons-450053 --format={{.State.Status}}
	I1123 08:22:25.557104  119666 ssh_runner.go:195] Run: systemctl --version
	I1123 08:22:25.557167  119666 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-450053
	I1123 08:22:25.575200  119666 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21969-103686/.minikube/machines/addons-450053/id_rsa Username:docker}
	I1123 08:22:25.679596  119666 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1123 08:22:25.679684  119666 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1123 08:22:25.709202  119666 cri.go:89] found id: "738d8d379f2513ebbed6c9882209756963a949bde3ed19ade5de8580001c43b6"
	I1123 08:22:25.709297  119666 cri.go:89] found id: "d984a1356e5ecf35be65e8fc6e7992bb042d8927a704c9b1e8331c05254332d5"
	I1123 08:22:25.709311  119666 cri.go:89] found id: "f1bd36bf8d3aa419e06a2d8728e06eef3a4eb3bac9a5f4c3b24fff0f491bdd61"
	I1123 08:22:25.709317  119666 cri.go:89] found id: "e39671b6291757e254f89dc6033c7d24376b7c7120673820ff9f2cd071649ede"
	I1123 08:22:25.709322  119666 cri.go:89] found id: "524005afa9256011512767926b02159bfbb545a2d097df64aeda6918b32cfbaa"
	I1123 08:22:25.709328  119666 cri.go:89] found id: "9989944eaa26fdbd8c011baeec7cf3efbfbbe246f5276b6ceecbd64d61294399"
	I1123 08:22:25.709332  119666 cri.go:89] found id: "e3688d5b85c227523b5a3ce94991d4ee820fdc1ae296225f370587505ff591b6"
	I1123 08:22:25.709337  119666 cri.go:89] found id: "8ecc013e239af1858173ffe38500069f30090d7c4a8d2e55e0cf7931a593fbbe"
	I1123 08:22:25.709341  119666 cri.go:89] found id: "1dfc56fc8d94b1225a098a523c9650f6663217b21237541dc906578e3effc03d"
	I1123 08:22:25.709351  119666 cri.go:89] found id: "227f1cba9bc38078f86a2ee004edc57f34ac09f7aae18e70a35257d97524a389"
	I1123 08:22:25.709355  119666 cri.go:89] found id: "878966c2c1dd7601f149f13eb451daa7034eebd08cef35eebb83a577b882ce48"
	I1123 08:22:25.709358  119666 cri.go:89] found id: "f9cd2adc0709d244a2c7bc3357291110cd3b690d9689c58d1d015c5371f7f2ca"
	I1123 08:22:25.709364  119666 cri.go:89] found id: "a6ff371d12340c0a9617d886be8620819d349d024e915a5c18777920e9522800"
	I1123 08:22:25.709367  119666 cri.go:89] found id: "8364e195c165b56eaa9cee7e25199a566d7f232fea45a9c0da829ce74e7a169e"
	I1123 08:22:25.709370  119666 cri.go:89] found id: "bca140d99c87f34e3a5c81b3e3f53364fd36a08c860a55709db43ad1f00c7bd8"
	I1123 08:22:25.709377  119666 cri.go:89] found id: "0e62c249e71fecd3ff09a415c2a850ba5eb56735172347f36a18693f8631498e"
	I1123 08:22:25.709382  119666 cri.go:89] found id: "4ea39bfdb1b8efcabd00b3a8f0c5de2e2517bbf238a941a88f165c0fd97b9473"
	I1123 08:22:25.709386  119666 cri.go:89] found id: "fc8e0ddc56a4bb558f8c5f776af3a634abca3c1b4e966b269f9b6b6d44daab9e"
	I1123 08:22:25.709388  119666 cri.go:89] found id: "f5b7d2b9fc7fd00d96afc93889d6409f70924a5f3c846c8b45a49812283c03d7"
	I1123 08:22:25.709391  119666 cri.go:89] found id: "204df826a5f7fe5bc267f72965e69d0d4468a7d03c4f27b3c4dbc85558bcb790"
	I1123 08:22:25.709402  119666 cri.go:89] found id: "2b03a5a989737025a7389ea9ffc8f4440c697509dd7c3423436b93deabfc0635"
	I1123 08:22:25.709409  119666 cri.go:89] found id: "5ce09f86a113c208d0fb8a89714d1855c858b529d081cf18c7e93d6084542e29"
	I1123 08:22:25.709411  119666 cri.go:89] found id: "3d0e901e59417cd60672ef724e6e4202b2c13b0499e5b198455979ade2929d18"
	I1123 08:22:25.709416  119666 cri.go:89] found id: "58c0dd74075cff851380d2c9c1fbd280646b92ecf56beb6cd7c81444769b9354"
	I1123 08:22:25.709423  119666 cri.go:89] found id: ""
	I1123 08:22:25.709464  119666 ssh_runner.go:195] Run: sudo runc list -f json
	I1123 08:22:25.724283  119666 out.go:203] 
	W1123 08:22:25.726179  119666 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T08:22:25Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T08:22:25Z" level=error msg="open /run/runc: no such file or directory"
	
	W1123 08:22:25.726199  119666 out.go:285] * 
	* 
	W1123 08:22:25.729313  119666 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_d91df5e23a6c7812cf3b3b0d72c142ff742a541e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_d91df5e23a6c7812cf3b3b0d72c142ff742a541e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1123 08:22:25.730480  119666 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable amd-gpu-device-plugin addon: args "out/minikube-linux-amd64 -p addons-450053 addons disable amd-gpu-device-plugin --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/AmdGpuDevicePlugin (5.26s)
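Same failure mode as the Yakd result above: the pre-disable paused-state check shells out to runc, and the missing /run/runc directory makes every "addons disable" invocation exit 11. The triage sketch under the Yakd result applies here unchanged.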

TestFunctional/parallel/ServiceCmdConnect (603.06s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-709702 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-709702 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:352: "hello-node-connect-7d85dfc575-7c655" [286a1604-e9e0-40f6-a37b-c01087b916d0] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:337: TestFunctional/parallel/ServiceCmdConnect: WARNING: pod list for "default" "app=hello-node-connect" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test.go:1645: ***** TestFunctional/parallel/ServiceCmdConnect: pod "app=hello-node-connect" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1645: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-709702 -n functional-709702
functional_test.go:1645: TestFunctional/parallel/ServiceCmdConnect: showing logs for failed pods as of 2025-11-23 08:38:19.103579746 +0000 UTC m=+1118.201940553
functional_test.go:1645: (dbg) Run:  kubectl --context functional-709702 describe po hello-node-connect-7d85dfc575-7c655 -n default
functional_test.go:1645: (dbg) kubectl --context functional-709702 describe po hello-node-connect-7d85dfc575-7c655 -n default:
Name:             hello-node-connect-7d85dfc575-7c655
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-709702/192.168.49.2
Start Time:       Sun, 23 Nov 2025 08:28:18 +0000
Labels:           app=hello-node-connect
                  pod-template-hash=7d85dfc575
Annotations:      <none>
Status:           Pending
IP:               10.244.0.7
IPs:
  IP:           10.244.0.7
Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
Containers:
  echo-server:
    Container ID:   
    Image:          kicbase/echo-server
    Image ID:       
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-wtfrd (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  kube-api-access-wtfrd:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                     From               Message
  ----     ------     ----                    ----               -------
  Normal   Scheduled  10m                     default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-7c655 to functional-709702
  Normal   Pulling    7m5s (x5 over 9m59s)    kubelet            Pulling image "kicbase/echo-server"
  Warning  Failed     7m5s (x5 over 9m56s)    kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
  Warning  Failed     7m5s (x5 over 9m56s)    kubelet            Error: ErrImagePull
  Normal   BackOff    4m44s (x21 over 9m55s)  kubelet            Back-off pulling image "kicbase/echo-server"
  Warning  Failed     4m44s (x21 over 9m55s)  kubelet            Error: ImagePullBackOff
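The pull failures above are short-name resolution errors from the node's image policy, not registry outages: with short-name-mode set to "enforcing" in /etc/containers/registries.conf and more than one unqualified-search registry configured, the bare name kicbase/echo-server:latest resolves to an ambiguous candidate list and the pull is refused before any network fetch. A hedged sketch of the likely fix; the docker.io prefix is an assumption about where the image lives, not something this report confirms:

	# Deploy with a fully qualified reference so no short-name search happens:
	kubectl --context functional-709702 create deployment hello-node-connect \
	  --image docker.io/kicbase/echo-server:latest
	# Inspect the policy that produced the "enforcing" error on the node:
	minikube ssh -p functional-709702 -- grep -R short-name /etc/containers/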
functional_test.go:1645: (dbg) Run:  kubectl --context functional-709702 logs hello-node-connect-7d85dfc575-7c655 -n default
functional_test.go:1645: (dbg) Non-zero exit: kubectl --context functional-709702 logs hello-node-connect-7d85dfc575-7c655 -n default: exit status 1 (67.267317ms)

** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-connect-7d85dfc575-7c655" is waiting to start: trying and failing to pull image

** /stderr **
functional_test.go:1645: kubectl --context functional-709702 logs hello-node-connect-7d85dfc575-7c655 -n default: exit status 1
functional_test.go:1646: failed waiting for hello-node pod: app=hello-node-connect within 10m0s: context deadline exceeded
functional_test.go:1608: service test failed - dumping debug information
functional_test.go:1609: -----------------------service failure post-mortem--------------------------------
functional_test.go:1612: (dbg) Run:  kubectl --context functional-709702 describe po hello-node-connect
functional_test.go:1616: hello-node pod describe:
Name:             hello-node-connect-7d85dfc575-7c655
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-709702/192.168.49.2
Start Time:       Sun, 23 Nov 2025 08:28:18 +0000
Labels:           app=hello-node-connect
                  pod-template-hash=7d85dfc575
Annotations:      <none>
Status:           Pending
IP:               10.244.0.7
IPs:
  IP:           10.244.0.7
Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
Containers:
  echo-server:
    Container ID:   
    Image:          kicbase/echo-server
    Image ID:       
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-wtfrd (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  kube-api-access-wtfrd:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                     From               Message
  ----     ------     ----                    ----               -------
  Normal   Scheduled  10m                     default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-7c655 to functional-709702
  Normal   Pulling    7m5s (x5 over 9m59s)    kubelet            Pulling image "kicbase/echo-server"
  Warning  Failed     7m5s (x5 over 9m56s)    kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
  Warning  Failed     7m5s (x5 over 9m56s)    kubelet            Error: ErrImagePull
  Normal   BackOff    4m44s (x21 over 9m55s)  kubelet            Back-off pulling image "kicbase/echo-server"
  Warning  Failed     4m44s (x21 over 9m55s)  kubelet            Error: ImagePullBackOff

functional_test.go:1618: (dbg) Run:  kubectl --context functional-709702 logs -l app=hello-node-connect
functional_test.go:1618: (dbg) Non-zero exit: kubectl --context functional-709702 logs -l app=hello-node-connect: exit status 1 (62.002992ms)

** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-connect-7d85dfc575-7c655" is waiting to start: trying and failing to pull image

** /stderr **
functional_test.go:1620: "kubectl --context functional-709702 logs -l app=hello-node-connect" failed: exit status 1
functional_test.go:1622: hello-node logs:
functional_test.go:1624: (dbg) Run:  kubectl --context functional-709702 describe svc hello-node-connect
functional_test.go:1628: hello-node svc describe:
Name:                     hello-node-connect
Namespace:                default
Labels:                   app=hello-node-connect
Annotations:              <none>
Selector:                 app=hello-node-connect
Type:                     NodePort
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.110.169.121
IPs:                      10.110.169.121
Port:                     <unset>  8080/TCP
TargetPort:               8080/TCP
NodePort:                 <unset>  32319/TCP
Endpoints:                
Session Affinity:         None
External Traffic Policy:  Cluster
Internal Traffic Policy:  Cluster
Events:                   <none>
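Note the empty Endpoints field above: the pod never became Ready, so the NodePort service has no backends and any request to port 32319 would fail regardless of networking. A quick confirmation, assuming the cluster is still reachable:

	kubectl --context functional-709702 get endpoints hello-node-connect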
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-709702
helpers_test.go:243: (dbg) docker inspect functional-709702:

-- stdout --
	[
	    {
	        "Id": "216afbd2f8ad49a98532a0333c589d8b4c5c30ea4e68360c94737c7f9c685525",
	        "Created": "2025-11-23T08:26:04.291260813Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 130723,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-23T08:26:04.324078592Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:133ca4ac39008d0056ad45d8cb70521d6b70d6e1b8bbff4678fd4b354efbdf70",
	        "ResolvConfPath": "/var/lib/docker/containers/216afbd2f8ad49a98532a0333c589d8b4c5c30ea4e68360c94737c7f9c685525/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/216afbd2f8ad49a98532a0333c589d8b4c5c30ea4e68360c94737c7f9c685525/hostname",
	        "HostsPath": "/var/lib/docker/containers/216afbd2f8ad49a98532a0333c589d8b4c5c30ea4e68360c94737c7f9c685525/hosts",
	        "LogPath": "/var/lib/docker/containers/216afbd2f8ad49a98532a0333c589d8b4c5c30ea4e68360c94737c7f9c685525/216afbd2f8ad49a98532a0333c589d8b4c5c30ea4e68360c94737c7f9c685525-json.log",
	        "Name": "/functional-709702",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "functional-709702:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "functional-709702",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "216afbd2f8ad49a98532a0333c589d8b4c5c30ea4e68360c94737c7f9c685525",
	                "LowerDir": "/var/lib/docker/overlay2/33b0ee2cd4802944bec8142ec1b85353102bb7b6add143ae32801e773df13256-init/diff:/var/lib/docker/overlay2/5d7426d7b7590330a796f9ce8929a3dfaf6f95af5e86f4e4ea7f2a7f53308616/diff",
	                "MergedDir": "/var/lib/docker/overlay2/33b0ee2cd4802944bec8142ec1b85353102bb7b6add143ae32801e773df13256/merged",
	                "UpperDir": "/var/lib/docker/overlay2/33b0ee2cd4802944bec8142ec1b85353102bb7b6add143ae32801e773df13256/diff",
	                "WorkDir": "/var/lib/docker/overlay2/33b0ee2cd4802944bec8142ec1b85353102bb7b6add143ae32801e773df13256/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "functional-709702",
	                "Source": "/var/lib/docker/volumes/functional-709702/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-709702",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-709702",
	                "name.minikube.sigs.k8s.io": "functional-709702",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "bb7a60e02c3ccce723572ce3af503f8f332eb5b028f88de18c7830452693fa84",
	            "SandboxKey": "/var/run/docker/netns/bb7a60e02c3c",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32778"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32779"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32782"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32780"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32781"
	                    }
	                ]
	            },
	            "Networks": {
	                "functional-709702": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "f1b11b0d6bc83af209844e6d1314ad7fd1eceeba27d8b858ed1f3069ca81ecb2",
	                    "EndpointID": "111de4c7470de362dca668cb5e533f6c86a784b485ef5aaf0b6ff26ef8ec49cc",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "MacAddress": "4e:dc:3b:b7:28:0f",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-709702",
	                        "216afbd2f8ad"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-709702 -n functional-709702
helpers_test.go:252: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-709702 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p functional-709702 logs -n 25: (1.36732552s)
helpers_test.go:260: TestFunctional/parallel/ServiceCmdConnect logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                        ARGS                                                        │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh            │ functional-709702 ssh findmnt -T /mount-9p | grep 9p                                                               │ functional-709702 │ jenkins │ v1.37.0 │ 23 Nov 25 08:28 UTC │ 23 Nov 25 08:28 UTC │
	│ ssh            │ functional-709702 ssh -- ls -la /mount-9p                                                                          │ functional-709702 │ jenkins │ v1.37.0 │ 23 Nov 25 08:28 UTC │ 23 Nov 25 08:28 UTC │
	│ ssh            │ functional-709702 ssh sudo umount -f /mount-9p                                                                     │ functional-709702 │ jenkins │ v1.37.0 │ 23 Nov 25 08:28 UTC │                     │
	│ ssh            │ functional-709702 ssh findmnt -T /mount1                                                                           │ functional-709702 │ jenkins │ v1.37.0 │ 23 Nov 25 08:28 UTC │                     │
	│ mount          │ -p functional-709702 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2274503550/001:/mount1 --alsologtostderr -v=1 │ functional-709702 │ jenkins │ v1.37.0 │ 23 Nov 25 08:28 UTC │                     │
	│ mount          │ -p functional-709702 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2274503550/001:/mount2 --alsologtostderr -v=1 │ functional-709702 │ jenkins │ v1.37.0 │ 23 Nov 25 08:28 UTC │                     │
	│ mount          │ -p functional-709702 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2274503550/001:/mount3 --alsologtostderr -v=1 │ functional-709702 │ jenkins │ v1.37.0 │ 23 Nov 25 08:28 UTC │                     │
	│ ssh            │ functional-709702 ssh findmnt -T /mount1                                                                           │ functional-709702 │ jenkins │ v1.37.0 │ 23 Nov 25 08:28 UTC │ 23 Nov 25 08:28 UTC │
	│ ssh            │ functional-709702 ssh findmnt -T /mount2                                                                           │ functional-709702 │ jenkins │ v1.37.0 │ 23 Nov 25 08:28 UTC │ 23 Nov 25 08:28 UTC │
	│ ssh            │ functional-709702 ssh findmnt -T /mount3                                                                           │ functional-709702 │ jenkins │ v1.37.0 │ 23 Nov 25 08:28 UTC │ 23 Nov 25 08:28 UTC │
	│ mount          │ -p functional-709702 --kill=true                                                                                   │ functional-709702 │ jenkins │ v1.37.0 │ 23 Nov 25 08:28 UTC │                     │
	│ license        │                                                                                                                    │ minikube          │ jenkins │ v1.37.0 │ 23 Nov 25 08:28 UTC │ 23 Nov 25 08:28 UTC │
	│ start          │ -p functional-709702 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio          │ functional-709702 │ jenkins │ v1.37.0 │ 23 Nov 25 08:28 UTC │                     │
	│ start          │ -p functional-709702 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                    │ functional-709702 │ jenkins │ v1.37.0 │ 23 Nov 25 08:28 UTC │                     │
	│ dashboard      │ --url --port 36195 -p functional-709702 --alsologtostderr -v=1                                                     │ functional-709702 │ jenkins │ v1.37.0 │ 23 Nov 25 08:28 UTC │ 23 Nov 25 08:28 UTC │
	│ update-context │ functional-709702 update-context --alsologtostderr -v=2                                                            │ functional-709702 │ jenkins │ v1.37.0 │ 23 Nov 25 08:28 UTC │ 23 Nov 25 08:28 UTC │
	│ update-context │ functional-709702 update-context --alsologtostderr -v=2                                                            │ functional-709702 │ jenkins │ v1.37.0 │ 23 Nov 25 08:28 UTC │ 23 Nov 25 08:28 UTC │
	│ update-context │ functional-709702 update-context --alsologtostderr -v=2                                                            │ functional-709702 │ jenkins │ v1.37.0 │ 23 Nov 25 08:28 UTC │ 23 Nov 25 08:28 UTC │
	│ image          │ functional-709702 image ls --format short --alsologtostderr                                                        │ functional-709702 │ jenkins │ v1.37.0 │ 23 Nov 25 08:28 UTC │ 23 Nov 25 08:28 UTC │
	│ ssh            │ functional-709702 ssh pgrep buildkitd                                                                              │ functional-709702 │ jenkins │ v1.37.0 │ 23 Nov 25 08:28 UTC │                     │
	│ image          │ functional-709702 image build -t localhost/my-image:functional-709702 testdata/build --alsologtostderr             │ functional-709702 │ jenkins │ v1.37.0 │ 23 Nov 25 08:28 UTC │ 23 Nov 25 08:28 UTC │
	│ image          │ functional-709702 image ls                                                                                         │ functional-709702 │ jenkins │ v1.37.0 │ 23 Nov 25 08:28 UTC │ 23 Nov 25 08:28 UTC │
	│ image          │ functional-709702 image ls --format json --alsologtostderr                                                         │ functional-709702 │ jenkins │ v1.37.0 │ 23 Nov 25 08:28 UTC │ 23 Nov 25 08:28 UTC │
	│ image          │ functional-709702 image ls --format table --alsologtostderr                                                        │ functional-709702 │ jenkins │ v1.37.0 │ 23 Nov 25 08:28 UTC │ 23 Nov 25 08:28 UTC │
	│ image          │ functional-709702 image ls --format yaml --alsologtostderr                                                         │ functional-709702 │ jenkins │ v1.37.0 │ 23 Nov 25 08:28 UTC │ 23 Nov 25 08:28 UTC │
	└────────────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/23 08:28:41
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1123 08:28:41.859299  145844 out.go:360] Setting OutFile to fd 1 ...
	I1123 08:28:41.859410  145844 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:28:41.859415  145844 out.go:374] Setting ErrFile to fd 2...
	I1123 08:28:41.859419  145844 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:28:41.859656  145844 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21969-103686/.minikube/bin
	I1123 08:28:41.860142  145844 out.go:368] Setting JSON to false
	I1123 08:28:41.861348  145844 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":4262,"bootTime":1763882260,"procs":247,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1123 08:28:41.861407  145844 start.go:143] virtualization: kvm guest
	I1123 08:28:41.863896  145844 out.go:179] * [functional-709702] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1123 08:28:41.865240  145844 notify.go:221] Checking for updates...
	I1123 08:28:41.865278  145844 out.go:179]   - MINIKUBE_LOCATION=21969
	I1123 08:28:41.866751  145844 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1123 08:28:41.868119  145844 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21969-103686/kubeconfig
	I1123 08:28:41.869458  145844 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21969-103686/.minikube
	I1123 08:28:41.870535  145844 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1123 08:28:41.873421  145844 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1123 08:28:41.875399  145844 config.go:182] Loaded profile config "functional-709702": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 08:28:41.875911  145844 driver.go:422] Setting default libvirt URI to qemu:///system
	I1123 08:28:41.904812  145844 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1123 08:28:41.904994  145844 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 08:28:41.977515  145844 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:39 OomKillDisable:false NGoroutines:57 SystemTime:2025-11-23 08:28:41.964728617 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1123 08:28:41.977779  145844 docker.go:319] overlay module found
	I1123 08:28:41.979484  145844 out.go:179] * Using the docker driver based on existing profile
	I1123 08:28:41.980626  145844 start.go:309] selected driver: docker
	I1123 08:28:41.980641  145844 start.go:927] validating driver "docker" against &{Name:functional-709702 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-709702 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 08:28:41.980753  145844 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1123 08:28:41.980845  145844 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 08:28:42.050002  145844 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:39 OomKillDisable:false NGoroutines:57 SystemTime:2025-11-23 08:28:42.038441866 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1123 08:28:42.050600  145844 cni.go:84] Creating CNI manager for ""
	I1123 08:28:42.050654  145844 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1123 08:28:42.050736  145844 start.go:353] cluster config:
	{Name:functional-709702 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-709702 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 08:28:42.053851  145844 out.go:179] * dry-run validation complete!
	
	
	==> CRI-O <==
	Nov 23 08:28:45 functional-709702 crio[3613]: time="2025-11-23T08:28:45.124464316Z" level=info msg="Started container" PID=7665 containerID=6af513bdab77f055cd64b83680a031bbeb49c4595b8d9d69a61bef63710f8024 description=kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-pjb5x/dashboard-metrics-scraper id=0bf1fe3e-8c21-4060-85df-f85b87506e1b name=/runtime.v1.RuntimeService/StartContainer sandboxID=916ac6ee02c7e57f8112f8f106416cd1680bf8df45f19f229cdb17651fbd04e4
	Nov 23 08:28:48 functional-709702 crio[3613]: time="2025-11-23T08:28:48.724684772Z" level=info msg="Pulled image: docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029" id=8bc39936-dc02-4027-982c-678d76697cb4 name=/runtime.v1.ImageService/PullImage
	Nov 23 08:28:48 functional-709702 crio[3613]: time="2025-11-23T08:28:48.72534938Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=487368da-fbf7-4a37-92f5-b936f06082c0 name=/runtime.v1.ImageService/ImageStatus
	Nov 23 08:28:48 functional-709702 crio[3613]: time="2025-11-23T08:28:48.727093331Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=8d44d6d9-15a9-42da-88bb-243d44448d00 name=/runtime.v1.ImageService/ImageStatus
	Nov 23 08:28:48 functional-709702 crio[3613]: time="2025-11-23T08:28:48.73147708Z" level=info msg="Creating container: kubernetes-dashboard/kubernetes-dashboard-855c9754f9-h7j8m/kubernetes-dashboard" id=7a8c355c-65ca-49d8-b4ab-8a02a947af66 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 08:28:48 functional-709702 crio[3613]: time="2025-11-23T08:28:48.731612886Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 08:28:48 functional-709702 crio[3613]: time="2025-11-23T08:28:48.736190936Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 08:28:48 functional-709702 crio[3613]: time="2025-11-23T08:28:48.736418504Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/2212d1fc9144390c08fa5a32807b381e186c7e2211f03125f231ae5e2a40507d/merged/etc/group: no such file or directory"
	Nov 23 08:28:48 functional-709702 crio[3613]: time="2025-11-23T08:28:48.736839632Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 08:28:48 functional-709702 crio[3613]: time="2025-11-23T08:28:48.764017632Z" level=info msg="Created container 91f6d861d05faec2d699633d9dbe4b57fb238a6f5fe88f93187741e151f01491: kubernetes-dashboard/kubernetes-dashboard-855c9754f9-h7j8m/kubernetes-dashboard" id=7a8c355c-65ca-49d8-b4ab-8a02a947af66 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 08:28:48 functional-709702 crio[3613]: time="2025-11-23T08:28:48.764757155Z" level=info msg="Starting container: 91f6d861d05faec2d699633d9dbe4b57fb238a6f5fe88f93187741e151f01491" id=147ee35f-0ec6-44d0-b366-4635a8d8b081 name=/runtime.v1.RuntimeService/StartContainer
	Nov 23 08:28:48 functional-709702 crio[3613]: time="2025-11-23T08:28:48.76670507Z" level=info msg="Started container" PID=8061 containerID=91f6d861d05faec2d699633d9dbe4b57fb238a6f5fe88f93187741e151f01491 description=kubernetes-dashboard/kubernetes-dashboard-855c9754f9-h7j8m/kubernetes-dashboard id=147ee35f-0ec6-44d0-b366-4635a8d8b081 name=/runtime.v1.RuntimeService/StartContainer sandboxID=904ff0caa1d95d771a615862530cdcf0116efa985d084f2c32719fbb678fb528
	Nov 23 08:29:04 functional-709702 crio[3613]: time="2025-11-23T08:29:04.130223751Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=f3bc1df3-b46f-41dc-86a3-bb8cbf8a1bef name=/runtime.v1.ImageService/PullImage
	Nov 23 08:29:05 functional-709702 crio[3613]: time="2025-11-23T08:29:05.130913462Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=4911c1c3-80e0-48f8-92df-ddeb85830925 name=/runtime.v1.ImageService/PullImage
	Nov 23 08:29:20 functional-709702 crio[3613]: time="2025-11-23T08:29:20.545085801Z" level=info msg="Stopping pod sandbox: 35a177a84b88cb69dac0bf8806fcb4d61858d92ff544c04722a8e3a9caf56f64" id=ded3b193-a6f8-46bf-a78b-00110d5b33e5 name=/runtime.v1.RuntimeService/StopPodSandbox
	Nov 23 08:29:20 functional-709702 crio[3613]: time="2025-11-23T08:29:20.545142087Z" level=info msg="Stopped pod sandbox (already stopped): 35a177a84b88cb69dac0bf8806fcb4d61858d92ff544c04722a8e3a9caf56f64" id=ded3b193-a6f8-46bf-a78b-00110d5b33e5 name=/runtime.v1.RuntimeService/StopPodSandbox
	Nov 23 08:29:20 functional-709702 crio[3613]: time="2025-11-23T08:29:20.545473029Z" level=info msg="Removing pod sandbox: 35a177a84b88cb69dac0bf8806fcb4d61858d92ff544c04722a8e3a9caf56f64" id=fb41d42e-f813-489f-b8d2-5d1c478cc6fe name=/runtime.v1.RuntimeService/RemovePodSandbox
	Nov 23 08:29:20 functional-709702 crio[3613]: time="2025-11-23T08:29:20.548192001Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Nov 23 08:29:20 functional-709702 crio[3613]: time="2025-11-23T08:29:20.548247967Z" level=info msg="Removed pod sandbox: 35a177a84b88cb69dac0bf8806fcb4d61858d92ff544c04722a8e3a9caf56f64" id=fb41d42e-f813-489f-b8d2-5d1c478cc6fe name=/runtime.v1.RuntimeService/RemovePodSandbox
	Nov 23 08:29:46 functional-709702 crio[3613]: time="2025-11-23T08:29:46.13103236Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=df401146-5f07-44a3-bc30-d25283fee58b name=/runtime.v1.ImageService/PullImage
	Nov 23 08:29:57 functional-709702 crio[3613]: time="2025-11-23T08:29:57.131005167Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=5a91a7b4-1ffe-436b-ac18-f02e52724fd5 name=/runtime.v1.ImageService/PullImage
	Nov 23 08:31:14 functional-709702 crio[3613]: time="2025-11-23T08:31:14.131159447Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=19220f1a-cf59-42b4-b2fb-1f297752862c name=/runtime.v1.ImageService/PullImage
	Nov 23 08:31:27 functional-709702 crio[3613]: time="2025-11-23T08:31:27.130425187Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=4905e8bc-c544-4c6c-8466-68574a14c2b4 name=/runtime.v1.ImageService/PullImage
	Nov 23 08:34:03 functional-709702 crio[3613]: time="2025-11-23T08:34:03.13127631Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=36c661cf-fbe9-447d-b95f-cc38a8867ea5 name=/runtime.v1.ImageService/PullImage
	Nov 23 08:34:13 functional-709702 crio[3613]: time="2025-11-23T08:34:13.131014906Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=502aeea8-d271-4467-9f6f-fdeed1f1c385 name=/runtime.v1.ImageService/PullImage
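
Editor's note: between 08:29 and 08:34 CRI-O logs repeated "Pulling image: kicbase/echo-server:latest" entries with no matching "Pulled image" line, consistent with the hello-node pods (and the 600s ServiceCmd/DeployApp timeout) waiting on a pull that never completes. A sketch for checking pull state from the node; the image name comes from the log, the rest is a generic crictl invocation:

    # Inspect the runtime's image store, then retry the pull interactively.
    $ minikube -p functional-709702 ssh -- sudo crictl images | grep echo-server
    $ minikube -p functional-709702 ssh -- sudo crictl pull docker.io/kicbase/echo-server:latest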
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                            CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	91f6d861d05fa       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029         9 minutes ago       Running             kubernetes-dashboard        0                   904ff0caa1d95       kubernetes-dashboard-855c9754f9-h7j8m        kubernetes-dashboard
	6af513bdab77f       docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a   9 minutes ago       Running             dashboard-metrics-scraper   0                   916ac6ee02c7e       dashboard-metrics-scraper-77bf4d6c4c-pjb5x   kubernetes-dashboard
	47565ed7ba808       docker.io/library/nginx@sha256:5c733364e9a8f7e6d7289ceaad623c6600479fe95c3ab5534f07bfd7416d9541                  9 minutes ago       Running             myfrontend                  0                   59c8a517bfc55       sp-pod                                       default
	9eddb7fc81932       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998              9 minutes ago       Exited              mount-munger                0                   d798e1f1885ea       busybox-mount                                default
	2e92d6e766cc8       docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da                  9 minutes ago       Running             mysql                       0                   716de40dafc1a       mysql-5bb876957f-8tl2w                       default
	8a55a522cc48f       docker.io/library/nginx@sha256:667473807103639a0aca5b49534a216d2b64f0fb868aaa801f023da0cdd781c7                  10 minutes ago      Running             nginx                       0                   74037c42591da       nginx-svc                                    default
	c6c51a29f05e2       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                 10 minutes ago      Running             storage-provisioner         2                   3509cf5066d5a       storage-provisioner                          kube-system
	11d1506e84149       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                                 10 minutes ago      Running             kube-apiserver              0                   3d4780bd29d18       kube-apiserver-functional-709702             kube-system
	3ec738b7f6485       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                                 10 minutes ago      Running             kube-scheduler              1                   2234e8a1c77ed       kube-scheduler-functional-709702             kube-system
	a55bff35bb726       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                                 10 minutes ago      Running             kube-controller-manager     1                   f6f7afb99a435       kube-controller-manager-functional-709702    kube-system
	a339ac88f7af5       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                                 10 minutes ago      Running             etcd                        1                   d589373bebbe1       etcd-functional-709702                       kube-system
	c421fa4177407       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                 11 minutes ago      Exited              storage-provisioner         1                   3509cf5066d5a       storage-provisioner                          kube-system
	39352c1462cf5       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                 11 minutes ago      Running             coredns                     1                   655e0524a8d31       coredns-66bc5c9577-g47kg                     kube-system
	d4b57f61091c4       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                                 11 minutes ago      Running             kindnet-cni                 1                   43259e85e64a0       kindnet-42bw8                                kube-system
	a4cb2fabc0d7a       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                                 11 minutes ago      Running             kube-proxy                  1                   e8cfdcff4e789       kube-proxy-pgtw4                             kube-system
	e97a7ec171bf8       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                 11 minutes ago      Exited              coredns                     0                   655e0524a8d31       coredns-66bc5c9577-g47kg                     kube-system
	3925fc1354683       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                                 11 minutes ago      Exited              kube-proxy                  0                   e8cfdcff4e789       kube-proxy-pgtw4                             kube-system
	215a119be6149       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                                 11 minutes ago      Exited              kindnet-cni                 0                   43259e85e64a0       kindnet-42bw8                                kube-system
	da4286c0105b1       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                                 12 minutes ago      Exited              kube-scheduler              0                   2234e8a1c77ed       kube-scheduler-functional-709702             kube-system
	e65c83bb9cce1       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                                 12 minutes ago      Exited              kube-controller-manager     0                   f6f7afb99a435       kube-controller-manager-functional-709702    kube-system
	f5bb711a72f81       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                                 12 minutes ago      Exited              etcd                        0                   d589373bebbe1       etcd-functional-709702                       kube-system
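
Editor's note: the table above is the runtime's view of every container on the node; the Exited rows are the pre-restart instances of the control-plane components. Roughly the same view can be produced directly, assuming crictl is pointed at the CRI-O socket (the default inside a minikube node):

    # Sketch: list all containers, including exited ones, as the report does.
    $ minikube -p functional-709702 ssh -- sudo crictl ps -a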
	
	
	==> coredns [39352c1462cf5f072920c27c590d9b6f24c5142c770fdb8dee6a82f437434e1c] <==
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:60578 - 62607 "HINFO IN 127163267083616212.3085087320499583388. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.074253935s
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": net/http: TLS handshake timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
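
Editor's note: the connection-refused and TLS-handshake-timeout errors against 10.96.0.1:443 span the window in which kube-apiserver was restarting; the ready plugin's "Still waiting on: kubernetes" lines clear once the API becomes reachable again. A sketch to confirm CoreDNS settled afterwards:

    # Check the CoreDNS pods and their most recent log lines.
    $ kubectl -n kube-system get pods -l k8s-app=kube-dns
    $ kubectl -n kube-system logs -l k8s-app=kube-dns --tail=20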
	
	
	==> coredns [e97a7ec171bf89feaae1ccf445fd52502a352208ab35ed70a88865b733a29abb] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:51548 - 18176 "HINFO IN 4393624731965062689.4856555496918680544. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.074068025s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               functional-709702
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-709702
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=50c3a8a3c03e8a84b6c978a884d21c3de8c6d4f1
	                    minikube.k8s.io/name=functional-709702
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_23T08_26_21_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 23 Nov 2025 08:26:18 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-709702
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 23 Nov 2025 08:38:12 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 23 Nov 2025 08:37:42 +0000   Sun, 23 Nov 2025 08:26:16 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 23 Nov 2025 08:37:42 +0000   Sun, 23 Nov 2025 08:26:16 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 23 Nov 2025 08:37:42 +0000   Sun, 23 Nov 2025 08:26:16 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 23 Nov 2025 08:37:42 +0000   Sun, 23 Nov 2025 08:26:37 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-709702
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 9629f1d5bc1ed524a56ce23c69214c09
	  System UUID:                a7d3d56b-50fd-4fb2-811b-9c8303717731
	  Boot ID:                    49386d37-de97-442f-8364-9b77578bcf47
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (15 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-75c85bcc94-hg4pq                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     hello-node-connect-7d85dfc575-7c655           0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     mysql-5bb876957f-8tl2w                        600m (7%)     700m (8%)   512Mi (1%)       700Mi (2%)     10m
	  default                     nginx-svc                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     sp-pod                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m49s
	  kube-system                 coredns-66bc5c9577-g47kg                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     11m
	  kube-system                 etcd-functional-709702                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         12m
	  kube-system                 kindnet-42bw8                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      11m
	  kube-system                 kube-apiserver-functional-709702              250m (3%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-functional-709702     200m (2%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-pgtw4                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-scheduler-functional-709702              100m (1%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kubernetes-dashboard        dashboard-metrics-scraper-77bf4d6c4c-pjb5x    0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m37s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-h7j8m         0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m37s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1450m (18%)  800m (10%)
	  memory             732Mi (2%)   920Mi (2%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 11m                kube-proxy       
	  Normal  Starting                 10m                kube-proxy       
	  Normal  Starting                 12m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  12m (x8 over 12m)  kubelet          Node functional-709702 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12m (x8 over 12m)  kubelet          Node functional-709702 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12m (x8 over 12m)  kubelet          Node functional-709702 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    12m                kubelet          Node functional-709702 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  12m                kubelet          Node functional-709702 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     12m                kubelet          Node functional-709702 status is now: NodeHasSufficientPID
	  Normal  Starting                 12m                kubelet          Starting kubelet.
	  Normal  RegisteredNode           11m                node-controller  Node functional-709702 event: Registered Node functional-709702 in Controller
	  Normal  NodeReady                11m                kubelet          Node functional-709702 status is now: NodeReady
	  Normal  Starting                 11m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  11m (x8 over 11m)  kubelet          Node functional-709702 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    11m (x8 over 11m)  kubelet          Node functional-709702 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     11m (x8 over 11m)  kubelet          Node functional-709702 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           10m                node-controller  Node functional-709702 event: Registered Node functional-709702 in Controller
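
Editor's note: the duplicated Starting/NodeHasSufficient*/RegisteredNode events reflect the kubelet and apiserver restarts performed by the functional tests, not a node problem. The section above can be reproduced directly:

    # Sketch: the same node view the report embeds.
    $ kubectl describe node functional-709702
    $ kubectl get node functional-709702 -o wide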
	
	
	==> dmesg <==
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 06 82 4b 59 78 74 08 06
	[Nov23 08:13] IPv4: martian source 10.244.0.1 from 10.244.0.51, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 1e 73 2a 74 8f 84 08 06
	[Nov23 08:22] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 52 89 3b 40 d7 04 72 61 ff 75 a6 33 08 00
	[  +1.017594] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000026] ll header: 00000000: 52 89 3b 40 d7 04 72 61 ff 75 a6 33 08 00
	[  +1.023854] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000020] ll header: 00000000: 52 89 3b 40 d7 04 72 61 ff 75 a6 33 08 00
	[  +1.023902] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 52 89 3b 40 d7 04 72 61 ff 75 a6 33 08 00
	[  +1.024926] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 52 89 3b 40 d7 04 72 61 ff 75 a6 33 08 00
	[  +1.022928] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000027] ll header: 00000000: 52 89 3b 40 d7 04 72 61 ff 75 a6 33 08 00
	[  +2.047819] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 52 89 3b 40 d7 04 72 61 ff 75 a6 33 08 00
	[  +4.031665] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 52 89 3b 40 d7 04 72 61 ff 75 a6 33 08 00
	[  +8.255342] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 52 89 3b 40 d7 04 72 61 ff 75 a6 33 08 00
	[Nov23 08:23] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 52 89 3b 40 d7 04 72 61 ff 75 a6 33 08 00
	[ +32.253523] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000023] ll header: 00000000: 52 89 3b 40 d7 04 72 61 ff 75 a6 33 08 00
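
Editor's note: the martian-source lines (pod-subnet addresses seen from 127.0.0.1 on eth0) are common with hairpin NAT in kind/minikube-style Docker networks, and appear only because martian logging is enabled in the kernel. A sketch to confirm the relevant sysctls; both are standard Linux knobs, not specific to this report:

    $ minikube -p functional-709702 ssh -- sysctl net.ipv4.conf.all.log_martians
    $ minikube -p functional-709702 ssh -- sysctl net.ipv4.conf.all.rp_filter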
	
	
	==> etcd [a339ac88f7af5a9896486bb45f399333ade0e8e9ec5e54499cb200425c96a0d2] <==
	{"level":"warn","ts":"2025-11-23T08:27:39.166576Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51980","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:27:39.173513Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51990","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:27:39.179527Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52006","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:27:39.185749Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52020","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:27:39.194663Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52040","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:27:39.201006Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52066","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:27:39.208029Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52090","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:27:39.215246Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52110","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:27:39.221265Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52122","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:27:39.227594Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52138","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:27:39.234068Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52158","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:27:39.240489Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52168","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:27:39.246538Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52190","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:27:39.252639Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52198","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:27:39.258759Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52222","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:27:39.264817Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52232","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:27:39.271959Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52256","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:27:39.291507Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52298","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:27:39.298749Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52302","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:27:39.305889Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52312","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:27:39.348786Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52334","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-23T08:28:20.092480Z","caller":"traceutil/trace.go:172","msg":"trace[408889537] transaction","detail":"{read_only:false; response_revision:688; number_of_response:1; }","duration":"119.173591ms","start":"2025-11-23T08:28:19.973280Z","end":"2025-11-23T08:28:20.092454Z","steps":["trace[408889537] 'process raft request'  (duration: 119.028292ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-23T08:37:38.850203Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1147}
	{"level":"info","ts":"2025-11-23T08:37:38.869411Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1147,"took":"18.882324ms","hash":3817659214,"current-db-size-bytes":3481600,"current-db-size":"3.5 MB","current-db-size-in-use-bytes":1658880,"current-db-size-in-use":"1.7 MB"}
	{"level":"info","ts":"2025-11-23T08:37:38.869460Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":3817659214,"revision":1147,"compact-revision":-1}
	
	
	==> etcd [f5bb711a72f814ff336c5487211ed7722671642139b243e633ff45bf9a24475e] <==
	{"level":"warn","ts":"2025-11-23T08:26:17.589633Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55828","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:26:17.596567Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55860","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:26:17.604677Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55892","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:26:17.616002Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55914","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:26:17.622172Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55942","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:26:17.628448Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55954","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T08:26:17.669881Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55968","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-23T08:27:16.784518Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-11-23T08:27:16.784607Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-709702","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	{"level":"error","ts":"2025-11-23T08:27:16.784729Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-11-23T08:27:16.786675Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-11-23T08:27:16.786731Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-23T08:27:16.786753Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
	{"level":"info","ts":"2025-11-23T08:27:16.786825Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"info","ts":"2025-11-23T08:27:16.786843Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"warn","ts":"2025-11-23T08:27:16.786869Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-11-23T08:27:16.786894Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-11-23T08:27:16.786912Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-11-23T08:27:16.786826Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-11-23T08:27:16.786939Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-11-23T08:27:16.786949Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-23T08:27:16.788434Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"error","ts":"2025-11-23T08:27:16.788495Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-23T08:27:16.788525Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2025-11-23T08:27:16.788533Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-709702","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	
	==> kernel <==
	 08:38:20 up  1:20,  0 user,  load average: 0.03, 0.20, 0.50
	Linux functional-709702 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [215a119be6149c4ce170bee1a90a7e239ef2f3fe76d753f59ece38aaba8ee41d] <==
	I1123 08:26:26.846761       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1123 08:26:26.847041       1 main.go:139] hostIP = 192.168.49.2
	podIP = 192.168.49.2
	I1123 08:26:26.847164       1 main.go:148] setting mtu 1500 for CNI 
	I1123 08:26:26.847177       1 main.go:178] kindnetd IP family: "ipv4"
	I1123 08:26:26.847198       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-23T08:26:27Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1123 08:26:27.139911       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1123 08:26:27.139952       1 controller.go:381] "Waiting for informer caches to sync"
	I1123 08:26:27.139995       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1123 08:26:27.140469       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1123 08:26:27.540544       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1123 08:26:27.540579       1 metrics.go:72] Registering metrics
	I1123 08:26:27.540783       1 controller.go:711] "Syncing nftables rules"
	I1123 08:26:37.140568       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1123 08:26:37.140631       1 main.go:301] handling current node
	I1123 08:26:47.140323       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1123 08:26:47.140359       1 main.go:301] handling current node
	I1123 08:26:57.140620       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1123 08:26:57.140652       1 main.go:301] handling current node
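
Editor's note: kindnet's "nri plugin exited" line is non-fatal (the NRI socket simply does not exist on this node), and the ten-second "Handling node" loop is its steady-state resync. A sketch for tailing the current instance; the app=kindnet label is an assumption about how minikube labels the kindnet daemonset and is worth verifying:

    $ kubectl -n kube-system logs -l app=kindnet --tail=20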
	
	
	==> kindnet [d4b57f61091c4b459f2ddbcdb3696e04f69cf1cc7107db325ae4137b49527da1] <==
	I1123 08:36:16.973999       1 main.go:301] handling current node
	I1123 08:36:26.977079       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1123 08:36:26.977143       1 main.go:301] handling current node
	I1123 08:36:36.974664       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1123 08:36:36.974702       1 main.go:301] handling current node
	I1123 08:36:46.978117       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1123 08:36:46.978153       1 main.go:301] handling current node
	I1123 08:36:56.977003       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1123 08:36:56.977043       1 main.go:301] handling current node
	I1123 08:37:06.973645       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1123 08:37:06.973684       1 main.go:301] handling current node
	I1123 08:37:16.973622       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1123 08:37:16.973659       1 main.go:301] handling current node
	I1123 08:37:26.976527       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1123 08:37:26.976576       1 main.go:301] handling current node
	I1123 08:37:36.975776       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1123 08:37:36.975817       1 main.go:301] handling current node
	I1123 08:37:46.977047       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1123 08:37:46.977096       1 main.go:301] handling current node
	I1123 08:37:56.977104       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1123 08:37:56.977135       1 main.go:301] handling current node
	I1123 08:38:06.977362       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1123 08:38:06.977401       1 main.go:301] handling current node
	I1123 08:38:16.974961       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1123 08:38:16.975048       1 main.go:301] handling current node
	
	
	==> kube-apiserver [11d1506e841493bafb60b57135eae3a733c3bdca88f3f73ad383139aff08de1f] <==
	I1123 08:27:39.828087       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1123 08:27:39.828171       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1123 08:27:40.231378       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1123 08:27:40.703040       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W1123 08:27:40.908289       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I1123 08:27:40.909608       1 controller.go:667] quota admission added evaluator for: endpoints
	I1123 08:27:40.913434       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1123 08:27:41.481678       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1123 08:27:41.571075       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1123 08:27:41.617647       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1123 08:27:41.622716       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1123 08:27:43.545448       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1123 08:28:05.594984       1 alloc.go:328] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.106.6.210"}
	I1123 08:28:11.134440       1 alloc.go:328] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.111.91.224"}
	I1123 08:28:11.648462       1 alloc.go:328] "allocated clusterIPs" service="default/mysql" clusterIPs={"IPv4":"10.103.116.47"}
	I1123 08:28:18.750819       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.110.169.121"}
	I1123 08:28:20.473407       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.97.243.43"}
	E1123 08:28:27.780400       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:49794: use of closed network connection
	E1123 08:28:29.158722       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:49816: use of closed network connection
	E1123 08:28:30.457625       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:49834: use of closed network connection
	E1123 08:28:41.607084       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:43066: use of closed network connection
	I1123 08:28:42.945914       1 controller.go:667] quota admission added evaluator for: namespaces
	I1123 08:28:43.051260       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.98.164.83"}
	I1123 08:28:43.070987       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.103.147.141"}
	I1123 08:37:39.721426       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
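
Editor's note: the "Error on socket receive ... use of closed network connection" entries on port 8441 line up with clients dropping tunnelled connections (compare the failed ServiceCmd HTTPS/Format/URL runs), while the clusterIP allocations show the test services being created normally. A sketch for probing apiserver readiness directly:

    # /readyz is a standard apiserver endpoint; verbose lists each check.
    $ kubectl get --raw='/readyz?verbose' | tail -n 5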
	
	
	==> kube-controller-manager [a55bff35bb726694c2871e0493031fd523c3c2f282992755f28da9232ae98eb7] <==
	I1123 08:27:43.141656       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1123 08:27:43.141714       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1123 08:27:43.141745       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1123 08:27:43.141796       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1123 08:27:43.142009       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1123 08:27:43.142178       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1123 08:27:43.143142       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1123 08:27:43.145094       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1123 08:27:43.145149       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1123 08:27:43.147349       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1123 08:27:43.147439       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1123 08:27:43.149604       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1123 08:27:43.149655       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1123 08:27:43.151826       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1123 08:27:43.157045       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1123 08:27:43.157059       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1123 08:27:43.157066       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1123 08:27:43.162224       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1123 08:27:43.163448       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E1123 08:28:42.995826       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1123 08:28:42.999013       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1123 08:28:43.000900       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1123 08:28:43.002359       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1123 08:28:43.003990       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1123 08:28:43.008666       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
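
Editor's note: the repeated "serviceaccount \"kubernetes-dashboard\" not found" errors are the usual create-order race while the dashboard manifests are being applied; the ReplicaSet controller retries until the ServiceAccount exists, and the Running dashboard pods earlier in this report show the race resolved. A sketch to verify:

    $ kubectl -n kubernetes-dashboard get serviceaccount,deployment,pods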
	
	
	==> kube-controller-manager [e65c83bb9cce1ee9ed48b7dd38aadfa12194011fa9bd0f600678e5146ce4d611] <==
	I1123 08:26:25.069107       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1123 08:26:25.069178       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1123 08:26:25.069210       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1123 08:26:25.069222       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1123 08:26:25.069356       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1123 08:26:25.069443       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1123 08:26:25.070106       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1123 08:26:25.071405       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1123 08:26:25.071425       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1123 08:26:25.073645       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1123 08:26:25.073727       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1123 08:26:25.073762       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1123 08:26:25.073769       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1123 08:26:25.073773       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1123 08:26:25.073860       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1123 08:26:25.076913       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1123 08:26:25.077000       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1123 08:26:25.080226       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="functional-709702" podCIDRs=["10.244.0.0/24"]
	I1123 08:26:25.082272       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1123 08:26:25.087507       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1123 08:26:25.087602       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1123 08:26:25.087684       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-709702"
	I1123 08:26:25.087752       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1123 08:26:25.094139       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1123 08:26:40.088880       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [3925fc1354683da04ea4a71e2b1d4a2ffb58a050e15d3b8f36769ebd4920e419] <==
	I1123 08:26:26.722673       1 server_linux.go:53] "Using iptables proxy"
	I1123 08:26:26.797456       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1123 08:26:26.897929       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1123 08:26:26.897983       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1123 08:26:26.898066       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1123 08:26:26.919292       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1123 08:26:26.919349       1 server_linux.go:132] "Using iptables Proxier"
	I1123 08:26:26.925325       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1123 08:26:26.925657       1 server.go:527] "Version info" version="v1.34.1"
	I1123 08:26:26.925697       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1123 08:26:26.927199       1 config.go:200] "Starting service config controller"
	I1123 08:26:26.927234       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1123 08:26:26.927334       1 config.go:309] "Starting node config controller"
	I1123 08:26:26.927353       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1123 08:26:26.927365       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1123 08:26:26.927333       1 config.go:106] "Starting endpoint slice config controller"
	I1123 08:26:26.927379       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1123 08:26:26.927418       1 config.go:403] "Starting serviceCIDR config controller"
	I1123 08:26:26.927424       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1123 08:26:27.027482       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1123 08:26:27.027518       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1123 08:26:27.027482       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-proxy [a4cb2fabc0d7a6f9c203d83aca312acadfdd024d16b44b006c1d1033e42aa3e8] <==
	E1123 08:27:06.689871       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-709702&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1123 08:27:07.738992       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-709702&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1123 08:27:10.595310       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-709702&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1123 08:27:15.437357       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-709702&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1123 08:27:32.680262       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-709702&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	I1123 08:27:57.889586       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1123 08:27:57.889618       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1123 08:27:57.889703       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1123 08:27:57.908925       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1123 08:27:57.908989       1 server_linux.go:132] "Using iptables Proxier"
	I1123 08:27:57.914314       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1123 08:27:57.914624       1 server.go:527] "Version info" version="v1.34.1"
	I1123 08:27:57.914657       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1123 08:27:57.915948       1 config.go:200] "Starting service config controller"
	I1123 08:27:57.915988       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1123 08:27:57.916187       1 config.go:309] "Starting node config controller"
	I1123 08:27:57.916202       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1123 08:27:57.916210       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1123 08:27:57.916679       1 config.go:403] "Starting serviceCIDR config controller"
	I1123 08:27:57.916783       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1123 08:27:57.916851       1 config.go:106] "Starting endpoint slice config controller"
	I1123 08:27:57.916866       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1123 08:27:58.016173       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1123 08:27:58.017350       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1123 08:27:58.017400       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
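
Editor's note: both kube-proxy instances warn that nodePortAddresses is unset, so NodePort connections are accepted on every local IP; the log itself suggests `--nodeport-addresses primary`. A sketch for applying that via the kubeadm-style kube-proxy ConfigMap, assuming minikube's usual layout (the ConfigMap name and field are standard KubeProxyConfiguration, but treat this as a sketch, not the tested configuration):

    # Set nodePortAddresses: ["primary"] under config.conf, then restart.
    $ kubectl -n kube-system edit configmap kube-proxy
    $ kubectl -n kube-system rollout restart daemonset kube-proxy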
	
	
	==> kube-scheduler [3ec738b7f648523e29be41722c11daf57fd57674b7062b39b973cc94687d03d0] <==
	I1123 08:27:38.757859       1 serving.go:386] Generated self-signed cert in-memory
	I1123 08:27:39.761409       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1123 08:27:39.761434       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1123 08:27:39.765159       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1123 08:27:39.765191       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1123 08:27:39.765196       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1123 08:27:39.765214       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1123 08:27:39.765217       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1123 08:27:39.765239       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1123 08:27:39.765548       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1123 08:27:39.765902       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1123 08:27:39.865727       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1123 08:27:39.865792       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1123 08:27:39.865803       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kube-scheduler [da4286c0105b16a956940c6a8011b09bc076995e5afbb77b871d89986b2cd41b] <==
	E1123 08:26:18.083353       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1123 08:26:18.083359       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1123 08:26:18.083315       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1123 08:26:18.083396       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1123 08:26:18.083430       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1123 08:26:18.083430       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1123 08:26:18.083466       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1123 08:26:18.083501       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1123 08:26:18.083523       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1123 08:26:18.943178       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1123 08:26:18.999911       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1123 08:26:19.010266       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1123 08:26:19.114510       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1123 08:26:19.132717       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1123 08:26:19.167824       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1123 08:26:19.208210       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1123 08:26:19.219121       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1123 08:26:19.242303       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	I1123 08:26:19.680653       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1123 08:27:16.564528       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1123 08:27:16.564559       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1123 08:27:16.564626       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1123 08:27:16.564672       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1123 08:27:16.564681       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1123 08:27:16.564711       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Nov 23 08:35:45 functional-709702 kubelet[4168]: E1123 08:35:45.130271    4168 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-7c655" podUID="286a1604-e9e0-40f6-a37b-c01087b916d0"
	Nov 23 08:35:48 functional-709702 kubelet[4168]: E1123 08:35:48.130726    4168 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-hg4pq" podUID="cb2c8556-d357-4528-b8a5-a2f68afa6d08"
	Nov 23 08:35:57 functional-709702 kubelet[4168]: E1123 08:35:57.130130    4168 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-7c655" podUID="286a1604-e9e0-40f6-a37b-c01087b916d0"
	Nov 23 08:36:02 functional-709702 kubelet[4168]: E1123 08:36:02.130476    4168 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-hg4pq" podUID="cb2c8556-d357-4528-b8a5-a2f68afa6d08"
	Nov 23 08:36:08 functional-709702 kubelet[4168]: E1123 08:36:08.129729    4168 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-7c655" podUID="286a1604-e9e0-40f6-a37b-c01087b916d0"
	Nov 23 08:36:16 functional-709702 kubelet[4168]: E1123 08:36:16.130909    4168 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-hg4pq" podUID="cb2c8556-d357-4528-b8a5-a2f68afa6d08"
	Nov 23 08:36:20 functional-709702 kubelet[4168]: E1123 08:36:20.130502    4168 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-7c655" podUID="286a1604-e9e0-40f6-a37b-c01087b916d0"
	Nov 23 08:36:28 functional-709702 kubelet[4168]: E1123 08:36:28.130552    4168 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-hg4pq" podUID="cb2c8556-d357-4528-b8a5-a2f68afa6d08"
	Nov 23 08:36:35 functional-709702 kubelet[4168]: E1123 08:36:35.130249    4168 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-7c655" podUID="286a1604-e9e0-40f6-a37b-c01087b916d0"
	Nov 23 08:36:40 functional-709702 kubelet[4168]: E1123 08:36:40.130029    4168 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-hg4pq" podUID="cb2c8556-d357-4528-b8a5-a2f68afa6d08"
	Nov 23 08:36:49 functional-709702 kubelet[4168]: E1123 08:36:49.130836    4168 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-7c655" podUID="286a1604-e9e0-40f6-a37b-c01087b916d0"
	Nov 23 08:36:54 functional-709702 kubelet[4168]: E1123 08:36:54.130168    4168 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-hg4pq" podUID="cb2c8556-d357-4528-b8a5-a2f68afa6d08"
	Nov 23 08:37:04 functional-709702 kubelet[4168]: E1123 08:37:04.129729    4168 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-7c655" podUID="286a1604-e9e0-40f6-a37b-c01087b916d0"
	Nov 23 08:37:05 functional-709702 kubelet[4168]: E1123 08:37:05.130753    4168 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-hg4pq" podUID="cb2c8556-d357-4528-b8a5-a2f68afa6d08"
	Nov 23 08:37:17 functional-709702 kubelet[4168]: E1123 08:37:17.130632    4168 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-hg4pq" podUID="cb2c8556-d357-4528-b8a5-a2f68afa6d08"
	Nov 23 08:37:17 functional-709702 kubelet[4168]: E1123 08:37:17.130753    4168 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-7c655" podUID="286a1604-e9e0-40f6-a37b-c01087b916d0"
	Nov 23 08:37:29 functional-709702 kubelet[4168]: E1123 08:37:29.130825    4168 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-7c655" podUID="286a1604-e9e0-40f6-a37b-c01087b916d0"
	Nov 23 08:37:31 functional-709702 kubelet[4168]: E1123 08:37:31.130197    4168 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-hg4pq" podUID="cb2c8556-d357-4528-b8a5-a2f68afa6d08"
	Nov 23 08:37:42 functional-709702 kubelet[4168]: E1123 08:37:42.129952    4168 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-hg4pq" podUID="cb2c8556-d357-4528-b8a5-a2f68afa6d08"
	Nov 23 08:37:42 functional-709702 kubelet[4168]: E1123 08:37:42.129952    4168 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-7c655" podUID="286a1604-e9e0-40f6-a37b-c01087b916d0"
	Nov 23 08:37:53 functional-709702 kubelet[4168]: E1123 08:37:53.130068    4168 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-7c655" podUID="286a1604-e9e0-40f6-a37b-c01087b916d0"
	Nov 23 08:37:54 functional-709702 kubelet[4168]: E1123 08:37:54.130003    4168 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-hg4pq" podUID="cb2c8556-d357-4528-b8a5-a2f68afa6d08"
	Nov 23 08:38:08 functional-709702 kubelet[4168]: E1123 08:38:08.130244    4168 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-7c655" podUID="286a1604-e9e0-40f6-a37b-c01087b916d0"
	Nov 23 08:38:09 functional-709702 kubelet[4168]: E1123 08:38:09.131304    4168 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-hg4pq" podUID="cb2c8556-d357-4528-b8a5-a2f68afa6d08"
	Nov 23 08:38:20 functional-709702 kubelet[4168]: E1123 08:38:20.130875    4168 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-7c655" podUID="286a1604-e9e0-40f6-a37b-c01087b916d0"
	
	
	==> kubernetes-dashboard [91f6d861d05faec2d699633d9dbe4b57fb238a6f5fe88f93187741e151f01491] <==
	2025/11/23 08:28:48 Starting overwatch
	2025/11/23 08:28:48 Using namespace: kubernetes-dashboard
	2025/11/23 08:28:48 Using in-cluster config to connect to apiserver
	2025/11/23 08:28:48 Using secret token for csrf signing
	2025/11/23 08:28:48 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/23 08:28:48 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/23 08:28:48 Successful initial request to the apiserver, version: v1.34.1
	2025/11/23 08:28:48 Generating JWE encryption key
	2025/11/23 08:28:48 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/23 08:28:48 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/23 08:28:48 Initializing JWE encryption key from synchronized object
	2025/11/23 08:28:48 Creating in-cluster Sidecar client
	2025/11/23 08:28:48 Successful request to sidecar
	2025/11/23 08:28:48 Serving insecurely on HTTP port: 9090
	
	
	==> storage-provisioner [c421fa41774079f51712d58ffaa2be79172dcaaab0ff83402ba8477201747909] <==
	I1123 08:27:06.588238       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1123 08:27:06.591572       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	
	
	==> storage-provisioner [c6c51a29f05e2332d4fdb31c475b460fdc60d7ad06c9c0474c52cbf1e4580ac9] <==
	W1123 08:37:56.310261       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:37:58.313836       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:37:58.317486       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:38:00.320293       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:38:00.325151       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:38:02.328260       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:38:02.331984       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:38:04.334919       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:38:04.339817       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:38:06.342863       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:38:06.346357       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:38:08.349074       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:38:08.352784       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:38:10.355101       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:38:10.359824       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:38:12.363011       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:38:12.366668       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:38:14.369741       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:38:14.373255       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:38:16.376190       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:38:16.379732       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:38:18.382646       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:38:18.387161       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:38:20.390376       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 08:38:20.395010       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-709702 -n functional-709702
helpers_test.go:269: (dbg) Run:  kubectl --context functional-709702 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: busybox-mount hello-node-75c85bcc94-hg4pq hello-node-connect-7d85dfc575-7c655
helpers_test.go:282: ======> post-mortem[TestFunctional/parallel/ServiceCmdConnect]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context functional-709702 describe pod busybox-mount hello-node-75c85bcc94-hg4pq hello-node-connect-7d85dfc575-7c655
helpers_test.go:290: (dbg) kubectl --context functional-709702 describe pod busybox-mount hello-node-75c85bcc94-hg4pq hello-node-connect-7d85dfc575-7c655:

-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-709702/192.168.49.2
	Start Time:       Sun, 23 Nov 2025 08:28:31 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.9
	IPs:
	  IP:  10.244.0.9
	Containers:
	  mount-munger:
	    Container ID:  cri-o://9eddb7fc81932feecd834db7f440229cfa90fa31fb96bd2bbd7b95ac6241255c
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Sun, 23 Nov 2025 08:28:33 +0000
	      Finished:     Sun, 23 Nov 2025 08:28:33 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-6rfxh (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-6rfxh:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age    From               Message
	  ----    ------     ----   ----               -------
	  Normal  Scheduled  9m50s  default-scheduler  Successfully assigned default/busybox-mount to functional-709702
	  Normal  Pulling    9m50s  kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     9m48s  kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 2.117s (2.117s including waiting). Image size: 4631262 bytes.
	  Normal  Created    9m48s  kubelet            Created container: mount-munger
	  Normal  Started    9m48s  kubelet            Started container mount-munger
	
	
	Name:             hello-node-75c85bcc94-hg4pq
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-709702/192.168.49.2
	Start Time:       Sun, 23 Nov 2025 08:28:20 +0000
	Labels:           app=hello-node
	                  pod-template-hash=75c85bcc94
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.8
	IPs:
	  IP:           10.244.0.8
	Controlled By:  ReplicaSet/hello-node-75c85bcc94
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-rpq8s (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-rpq8s:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                     From               Message
	  ----     ------     ----                    ----               -------
	  Normal   Scheduled  10m                     default-scheduler  Successfully assigned default/hello-node-75c85bcc94-hg4pq to functional-709702
	  Normal   Pulling    6m54s (x5 over 10m)     kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     6m54s (x5 over 9m58s)   kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
	  Warning  Failed     6m54s (x5 over 9m58s)   kubelet            Error: ErrImagePull
	  Warning  Failed     4m48s (x20 over 9m57s)  kubelet            Error: ImagePullBackOff
	  Normal   BackOff    4m34s (x21 over 9m57s)  kubelet            Back-off pulling image "kicbase/echo-server"
	
	
	Name:             hello-node-connect-7d85dfc575-7c655
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-709702/192.168.49.2
	Start Time:       Sun, 23 Nov 2025 08:28:18 +0000
	Labels:           app=hello-node-connect
	                  pod-template-hash=7d85dfc575
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.7
	IPs:
	  IP:           10.244.0.7
	Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-wtfrd (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-wtfrd:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                     From               Message
	  ----     ------     ----                    ----               -------
	  Normal   Scheduled  10m                     default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-7c655 to functional-709702
	  Normal   Pulling    7m7s (x5 over 10m)      kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     7m7s (x5 over 9m58s)    kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
	  Warning  Failed     7m7s (x5 over 9m58s)    kubelet            Error: ErrImagePull
	  Normal   BackOff    4m46s (x21 over 9m57s)  kubelet            Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     4m46s (x21 over 9m57s)  kubelet            Error: ImagePullBackOff

-- /stdout --
helpers_test.go:293: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (603.06s)
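
Note: every failure in this test traces to CRI-O's short-name enforcement rejecting the unqualified image reference "kicbase/echo-server" ("short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list"). A minimal sketch of two workarounds, assuming docker.io is the intended registry (an assumption, not something the test states):

	# 1) Point the deployment at a fully qualified image so enforcement never triggers
	kubectl --context functional-709702 set image deployment/hello-node-connect \
	  echo-server=docker.io/kicbase/echo-server:latest

	# 2) Or declare a short-name alias inside the node, per containers-registries.conf(5),
	#    e.g. in /etc/containers/registries.conf.d/99-echo-server.conf:
	#      [aliases]
	#      "kicbase/echo-server" = "docker.io/kicbase/echo-server"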

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.03s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-amd64 -p functional-709702 image load --daemon kicbase/echo-server:functional-709702 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-709702 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-709702" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.03s)
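
Note: the "image load --daemon" command itself exits zero; only the follow-up "image ls" detects that nothing landed in CRI-O storage. A sketch of the same check done by hand (the grep pattern is illustrative):

	out/minikube-linux-amd64 -p functional-709702 image ls | grep functional-709702 \
	  || echo "kicbase/echo-server:functional-709702 not in CRI-O storage"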

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.89s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-709702 image load --daemon kicbase/echo-server:functional-709702 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-709702 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-709702" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.89s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.73s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-709702
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-709702 image load --daemon kicbase/echo-server:functional-709702 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-709702 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-709702" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.73s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.3s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-amd64 -p functional-709702 image save kicbase/echo-server:functional-709702 /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:401: expected "/home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar" to exist after `image save`, but doesn't exist
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.30s)
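
Note: the assertion here is only that the tarball exists after "image save". A sketch reproducing the check against a scratch path (/tmp is an assumption):

	out/minikube-linux-amd64 -p functional-709702 image save \
	  kicbase/echo-server:functional-709702 /tmp/echo-server-save.tar --alsologtostderr
	test -s /tmp/echo-server-save.tar && echo "tarball written" || echo "image save wrote no file"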

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.25s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-709702 image load /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:426: loading image into minikube from file: <nil>

** stderr ** 
	I1123 08:28:15.688514  141500 out.go:360] Setting OutFile to fd 1 ...
	I1123 08:28:15.688801  141500 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:28:15.688813  141500 out.go:374] Setting ErrFile to fd 2...
	I1123 08:28:15.688819  141500 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:28:15.689034  141500 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21969-103686/.minikube/bin
	I1123 08:28:15.689693  141500 config.go:182] Loaded profile config "functional-709702": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 08:28:15.689847  141500 config.go:182] Loaded profile config "functional-709702": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 08:28:15.690411  141500 cli_runner.go:164] Run: docker container inspect functional-709702 --format={{.State.Status}}
	I1123 08:28:15.712931  141500 ssh_runner.go:195] Run: systemctl --version
	I1123 08:28:15.713010  141500 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-709702
	I1123 08:28:15.742396  141500 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21969-103686/.minikube/machines/functional-709702/id_rsa Username:docker}
	I1123 08:28:15.858644  141500 cache_images.go:291] Loading image from: /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar
	W1123 08:28:15.858869  141500 cache_images.go:255] Failed to load cached images for "functional-709702": loading images: stat /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar: no such file or directory
	I1123 08:28:15.858928  141500 cache_images.go:267] failed pushing to: functional-709702

** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.25s)
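
Note: this failure cascades from ImageSaveToFile above; the stat error shows the tarball was never written, so the load had nothing to read. A sketch that separates the two failure modes when reproducing by hand:

	tar=/home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar
	if [ -s "$tar" ]; then
	  out/minikube-linux-amd64 -p functional-709702 image load "$tar" --alsologtostderr
	else
	  echo "skipping load: tarball missing (see ImageSaveToFile)"
	fi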

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.48s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-709702
functional_test.go:439: (dbg) Run:  out/minikube-linux-amd64 -p functional-709702 image save --daemon kicbase/echo-server:functional-709702 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-709702
functional_test.go:447: (dbg) Non-zero exit: docker image inspect localhost/kicbase/echo-server:functional-709702: exit status 1 (23.455902ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: No such image: localhost/kicbase/echo-server:functional-709702

** /stderr **
functional_test.go:449: expected image to be loaded into Docker, but image was not found: exit status 1

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: No such image: localhost/kicbase/echo-server:functional-709702

** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.48s)
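
Note: "image save --daemon" should re-create the tag in the local Docker daemon (the test then inspects it under the localhost/ prefix); the inspect shows nothing arrived. A quick way to list what is actually present (pattern is illustrative):

	docker image ls --format '{{.Repository}}:{{.Tag}}' | grep -i echo-server \
	  || echo "no echo-server images in the Docker daemon"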

TestFunctional/parallel/ServiceCmd/DeployApp (600.63s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-709702 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-709702 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:352: "hello-node-75c85bcc94-hg4pq" [cb2c8556-d357-4528-b8a5-a2f68afa6d08] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:337: TestFunctional/parallel/ServiceCmd/DeployApp: WARNING: pod list for "default" "app=hello-node" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test.go:1460: ***** TestFunctional/parallel/ServiceCmd/DeployApp: pod "app=hello-node" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1460: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-709702 -n functional-709702
functional_test.go:1460: TestFunctional/parallel/ServiceCmd/DeployApp: showing logs for failed pods as of 2025-11-23 08:38:20.83730786 +0000 UTC m=+1119.935668671
functional_test.go:1460: (dbg) Run:  kubectl --context functional-709702 describe po hello-node-75c85bcc94-hg4pq -n default
functional_test.go:1460: (dbg) kubectl --context functional-709702 describe po hello-node-75c85bcc94-hg4pq -n default:
Name:             hello-node-75c85bcc94-hg4pq
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-709702/192.168.49.2
Start Time:       Sun, 23 Nov 2025 08:28:20 +0000
Labels:           app=hello-node
                  pod-template-hash=75c85bcc94
Annotations:      <none>
Status:           Pending
IP:               10.244.0.8
IPs:
  IP:           10.244.0.8
Controlled By:  ReplicaSet/hello-node-75c85bcc94
Containers:
  echo-server:
    Container ID:   
    Image:          kicbase/echo-server
    Image ID:       
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-rpq8s (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  kube-api-access-rpq8s:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                     From               Message
  ----     ------     ----                    ----               -------
  Normal   Scheduled  10m                     default-scheduler  Successfully assigned default/hello-node-75c85bcc94-hg4pq to functional-709702
  Normal   Pulling    6m53s (x5 over 10m)     kubelet            Pulling image "kicbase/echo-server"
  Warning  Failed     6m53s (x5 over 9m57s)   kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
  Warning  Failed     6m53s (x5 over 9m57s)   kubelet            Error: ErrImagePull
  Warning  Failed     4m47s (x20 over 9m56s)  kubelet            Error: ImagePullBackOff
  Normal   BackOff    4m33s (x21 over 9m56s)  kubelet            Back-off pulling image "kicbase/echo-server"
functional_test.go:1460: (dbg) Run:  kubectl --context functional-709702 logs hello-node-75c85bcc94-hg4pq -n default
functional_test.go:1460: (dbg) Non-zero exit: kubectl --context functional-709702 logs hello-node-75c85bcc94-hg4pq -n default: exit status 1 (68.121503ms)

** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-75c85bcc94-hg4pq" is waiting to start: trying and failing to pull image

** /stderr **
functional_test.go:1460: kubectl --context functional-709702 logs hello-node-75c85bcc94-hg4pq -n default: exit status 1
functional_test.go:1461: failed waiting for hello-node pod: app=hello-node within 10m0s: context deadline exceeded
--- FAIL: TestFunctional/parallel/ServiceCmd/DeployApp (600.63s)
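
Note: same short-name root cause as TestFunctional/parallel/ServiceCmdConnect above. On a fresh profile, creating the deployment with a fully qualified reference sidesteps enforcement; a sketch, with docker.io assumed to be the intended registry:

	kubectl --context functional-709702 create deployment hello-node \
	  --image=docker.io/kicbase/echo-server:latest
	kubectl --context functional-709702 rollout status deployment/hello-node --timeout=120s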

TestFunctional/parallel/ServiceCmd/HTTPS (0.55s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-amd64 -p functional-709702 service --namespace=default --https --url hello-node
functional_test.go:1519: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-709702 service --namespace=default --https --url hello-node: exit status 115 (544.876166ms)

-- stdout --
	https://192.168.49.2:32249
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_3af0dd3f106bd0c134df3d834cbdbb288a06d35d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:1521: failed to get service url. args "out/minikube-linux-amd64 -p functional-709702 service --namespace=default --https --url hello-node" : exit status 115
--- FAIL: TestFunctional/parallel/ServiceCmd/HTTPS (0.55s)

TestFunctional/parallel/ServiceCmd/Format (0.55s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-amd64 -p functional-709702 service hello-node --url --format={{.IP}}
functional_test.go:1550: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-709702 service hello-node --url --format={{.IP}}: exit status 115 (546.545025ms)

-- stdout --
	192.168.49.2
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:1552: failed to get service url with custom format. args "out/minikube-linux-amd64 -p functional-709702 service hello-node --url --format={{.IP}}": exit status 115
--- FAIL: TestFunctional/parallel/ServiceCmd/Format (0.55s)

TestFunctional/parallel/ServiceCmd/URL (0.54s)

                                                
functional_test.go:1569: (dbg) Run:  out/minikube-linux-amd64 -p functional-709702 service hello-node --url
functional_test.go:1569: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-709702 service hello-node --url: exit status 115 (540.471671ms)

-- stdout --
	http://192.168.49.2:32249
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:1571: failed to get service url. args: "out/minikube-linux-amd64 -p functional-709702 service hello-node --url": exit status 115
functional_test.go:1575: found endpoint for hello-node: http://192.168.49.2:32249
--- FAIL: TestFunctional/parallel/ServiceCmd/URL (0.54s)
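
Note: the three ServiceCmd failures above (HTTPS, Format, URL) are downstream of the same ImagePullBackOff: the NodePort is allocated (hence the URLs in stdout), but the service has no ready endpoints. A sketch that checks the endpoints directly instead of probing the URL:

	kubectl --context functional-709702 get endpoints hello-node -o wide
	kubectl --context functional-709702 get pods -l app=hello-node -o wide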

TestJSONOutput/pause/Command (2.38s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-430618 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p json-output-430618 --output=json --user=testUser: exit status 80 (2.383068693s)

-- stdout --
	{"specversion":"1.0","id":"10faf3b7-4971-43ab-9028-1e923f91a7e1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Pausing node json-output-430618 ...","name":"Pausing","totalsteps":"1"}}
	{"specversion":"1.0","id":"fbf78952-604c-4d5a-b0fc-a5d8ff6b7600","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"Pause: list running: runc: sudo runc list -f json: Process exited with status 1\nstdout:\n\nstderr:\ntime=\"2025-11-23T08:47:10Z\" level=error msg=\"open /run/runc: no such file or directory\"","name":"GUEST_PAUSE","url":""}}
	{"specversion":"1.0","id":"e1864332-3c5e-4864-93e4-7983644ad562","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│    Please also attach the following f
ile to the GitHub issue:                             │\n│    - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-linux-amd64 pause -p json-output-430618 --output=json --user=testUser": exit status 80
--- FAIL: TestJSONOutput/pause/Command (2.38s)
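Each line that `--output=json` emits above is a CloudEvents envelope. A minimal sketch of decoding such a stream in Go; the struct covers only the fields visible in this report and is illustrative, not minikube's own type:

	package main

	import (
		"bufio"
		"encoding/json"
		"fmt"
		"os"
	)

	// event mirrors the envelope printed above; data values are all strings there.
	type event struct {
		ID   string            `json:"id"`
		Type string            `json:"type"` // io.k8s.sigs.minikube.step / ...error
		Data map[string]string `json:"data"`
	}

	func main() {
		sc := bufio.NewScanner(os.Stdin) // e.g. minikube pause -p <profile> --output=json | prog
		sc.Buffer(make([]byte, 1<<20), 1<<20) // error events carry long messages
		for sc.Scan() {
			var ev event
			if json.Unmarshal(sc.Bytes(), &ev) != nil {
				continue // skip any non-JSON lines
			}
			if ev.Type == "io.k8s.sigs.minikube.error" {
				fmt.Printf("%s: %s\n", ev.Data["name"], ev.Data["message"])
			}
		}
	}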

x
+
TestJSONOutput/unpause/Command (1.45s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-430618 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-linux-amd64 unpause -p json-output-430618 --output=json --user=testUser: exit status 80 (1.450024894s)

-- stdout --
	{"specversion":"1.0","id":"910e0fa9-ab84-4f8a-af3d-7ae89802f32c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Unpausing node json-output-430618 ...","name":"Unpausing","totalsteps":"1"}}
	{"specversion":"1.0","id":"af72bf7e-a89d-4448-8e56-f716b3e3bd30","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"Pause: list paused: runc: sudo runc list -f json: Process exited with status 1\nstdout:\n\nstderr:\ntime=\"2025-11-23T08:47:12Z\" level=error msg=\"open /run/runc: no such file or directory\"","name":"GUEST_UNPAUSE","url":""}}
	{"specversion":"1.0","id":"c4a7e8a0-8356-4cc0-af3d-cc7c5caec113","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│    Please also attach the following file to the GitHub issue:                             │\n│    - /tmp/minikube_unpause_85c908ac827001a7ced33feb0caf7da086d17584_0.log                 │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-linux-amd64 unpause -p json-output-430618 --output=json --user=testUser": exit status 80
--- FAIL: TestJSONOutput/unpause/Command (1.45s)
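Pause and unpause fail on the same underlying error: runc cannot open its state directory /run/runc inside the node. A hedged way to confirm that from the host, sketched here via `minikube ssh` (the profile name is taken from the test above; this is a diagnostic sketch, not part of the suite):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Equivalent to: minikube -p json-output-430618 ssh -- sudo ls /run/runc
		out, err := exec.Command("minikube", "-p", "json-output-430618",
			"ssh", "--", "sudo", "ls", "/run/runc").CombinedOutput()
		fmt.Print(string(out))
		if err != nil {
			fmt.Println("probe failed; the state directory is likely absent:", err)
		}
	}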

x
+
TestPause/serial/Pause (5.97s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-397202 --alsologtostderr -v=5
pause_test.go:110: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p pause-397202 --alsologtostderr -v=5: exit status 80 (2.354334246s)

-- stdout --
	* Pausing node pause-397202 ... 
	
	

-- /stdout --
** stderr ** 
	I1123 09:02:47.263742  319404 out.go:360] Setting OutFile to fd 1 ...
	I1123 09:02:47.263867  319404 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 09:02:47.263878  319404 out.go:374] Setting ErrFile to fd 2...
	I1123 09:02:47.263882  319404 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 09:02:47.264187  319404 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21969-103686/.minikube/bin
	I1123 09:02:47.264514  319404 out.go:368] Setting JSON to false
	I1123 09:02:47.264538  319404 mustload.go:66] Loading cluster: pause-397202
	I1123 09:02:47.265017  319404 config.go:182] Loaded profile config "pause-397202": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 09:02:47.265435  319404 cli_runner.go:164] Run: docker container inspect pause-397202 --format={{.State.Status}}
	I1123 09:02:47.284440  319404 host.go:66] Checking if "pause-397202" exists ...
	I1123 09:02:47.284756  319404 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 09:02:47.341001  319404 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:81 OomKillDisable:false NGoroutines:87 SystemTime:2025-11-23 09:02:47.330581356 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1123 09:02:47.341634  319404 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1763503576-21924/minikube-v1.37.0-1763503576-21924-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1763503576-21924-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:pause-397202 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1123 09:02:47.343755  319404 out.go:179] * Pausing node pause-397202 ... 
	I1123 09:02:47.344900  319404 host.go:66] Checking if "pause-397202" exists ...
	I1123 09:02:47.345206  319404 ssh_runner.go:195] Run: systemctl --version
	I1123 09:02:47.345253  319404 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-397202
	I1123 09:02:47.363039  319404 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33028 SSHKeyPath:/home/jenkins/minikube-integration/21969-103686/.minikube/machines/pause-397202/id_rsa Username:docker}
	I1123 09:02:47.463657  319404 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 09:02:47.476235  319404 pause.go:52] kubelet running: true
	I1123 09:02:47.476333  319404 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1123 09:02:47.605770  319404 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1123 09:02:47.605863  319404 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1123 09:02:47.677263  319404 cri.go:89] found id: "ccc42c322c200e45d184ea9bd71d69ba34b954789b671365dce7545055e88536"
	I1123 09:02:47.677289  319404 cri.go:89] found id: "365b3b573a77a2ff0a22deddb7fdb06e6b2bc920107e22244e4820bc5137df66"
	I1123 09:02:47.677295  319404 cri.go:89] found id: "f3d24f3739abc889dcbb426abbf3b380336ddafb494a0b1d64a843f6189a19d0"
	I1123 09:02:47.677300  319404 cri.go:89] found id: "a028a05b2a7941979bb89b131402d5423bd73f7f4ad4b230d4a58cf622da8d85"
	I1123 09:02:47.677305  319404 cri.go:89] found id: "f5c1bc194c3b4fc7b5d8e2f47b51845d9a335c13f9879769b619d883841f25f4"
	I1123 09:02:47.677310  319404 cri.go:89] found id: "10634abd560004335d2e9611aa603556560fb6704e2dd0a376e2af47be6e9d37"
	I1123 09:02:47.677314  319404 cri.go:89] found id: "f9b138bbbfef9748bb9fc39c82d498ae87ac8d5da5ed98f16b602617b6e822b0"
	I1123 09:02:47.677318  319404 cri.go:89] found id: ""
	I1123 09:02:47.677393  319404 ssh_runner.go:195] Run: sudo runc list -f json
	I1123 09:02:47.690001  319404 retry.go:31] will retry after 285.95467ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T09:02:47Z" level=error msg="open /run/runc: no such file or directory"
	I1123 09:02:47.976574  319404 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 09:02:47.989916  319404 pause.go:52] kubelet running: false
	I1123 09:02:47.989991  319404 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1123 09:02:48.105043  319404 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1123 09:02:48.105144  319404 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1123 09:02:48.170887  319404 cri.go:89] found id: "ccc42c322c200e45d184ea9bd71d69ba34b954789b671365dce7545055e88536"
	I1123 09:02:48.170918  319404 cri.go:89] found id: "365b3b573a77a2ff0a22deddb7fdb06e6b2bc920107e22244e4820bc5137df66"
	I1123 09:02:48.170924  319404 cri.go:89] found id: "f3d24f3739abc889dcbb426abbf3b380336ddafb494a0b1d64a843f6189a19d0"
	I1123 09:02:48.170930  319404 cri.go:89] found id: "a028a05b2a7941979bb89b131402d5423bd73f7f4ad4b230d4a58cf622da8d85"
	I1123 09:02:48.170935  319404 cri.go:89] found id: "f5c1bc194c3b4fc7b5d8e2f47b51845d9a335c13f9879769b619d883841f25f4"
	I1123 09:02:48.170940  319404 cri.go:89] found id: "10634abd560004335d2e9611aa603556560fb6704e2dd0a376e2af47be6e9d37"
	I1123 09:02:48.170944  319404 cri.go:89] found id: "f9b138bbbfef9748bb9fc39c82d498ae87ac8d5da5ed98f16b602617b6e822b0"
	I1123 09:02:48.170948  319404 cri.go:89] found id: ""
	I1123 09:02:48.171022  319404 ssh_runner.go:195] Run: sudo runc list -f json
	I1123 09:02:48.182761  319404 retry.go:31] will retry after 538.389809ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T09:02:48Z" level=error msg="open /run/runc: no such file or directory"
	I1123 09:02:48.721467  319404 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 09:02:48.735208  319404 pause.go:52] kubelet running: false
	I1123 09:02:48.735283  319404 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1123 09:02:48.843408  319404 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1123 09:02:48.843491  319404 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1123 09:02:48.909282  319404 cri.go:89] found id: "ccc42c322c200e45d184ea9bd71d69ba34b954789b671365dce7545055e88536"
	I1123 09:02:48.909304  319404 cri.go:89] found id: "365b3b573a77a2ff0a22deddb7fdb06e6b2bc920107e22244e4820bc5137df66"
	I1123 09:02:48.909308  319404 cri.go:89] found id: "f3d24f3739abc889dcbb426abbf3b380336ddafb494a0b1d64a843f6189a19d0"
	I1123 09:02:48.909311  319404 cri.go:89] found id: "a028a05b2a7941979bb89b131402d5423bd73f7f4ad4b230d4a58cf622da8d85"
	I1123 09:02:48.909314  319404 cri.go:89] found id: "f5c1bc194c3b4fc7b5d8e2f47b51845d9a335c13f9879769b619d883841f25f4"
	I1123 09:02:48.909317  319404 cri.go:89] found id: "10634abd560004335d2e9611aa603556560fb6704e2dd0a376e2af47be6e9d37"
	I1123 09:02:48.909320  319404 cri.go:89] found id: "f9b138bbbfef9748bb9fc39c82d498ae87ac8d5da5ed98f16b602617b6e822b0"
	I1123 09:02:48.909324  319404 cri.go:89] found id: ""
	I1123 09:02:48.909369  319404 ssh_runner.go:195] Run: sudo runc list -f json
	I1123 09:02:48.921862  319404 retry.go:31] will retry after 425.252551ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T09:02:48Z" level=error msg="open /run/runc: no such file or directory"
	I1123 09:02:49.347500  319404 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 09:02:49.360954  319404 pause.go:52] kubelet running: false
	I1123 09:02:49.361030  319404 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1123 09:02:49.472737  319404 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1123 09:02:49.472835  319404 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1123 09:02:49.537112  319404 cri.go:89] found id: "ccc42c322c200e45d184ea9bd71d69ba34b954789b671365dce7545055e88536"
	I1123 09:02:49.537144  319404 cri.go:89] found id: "365b3b573a77a2ff0a22deddb7fdb06e6b2bc920107e22244e4820bc5137df66"
	I1123 09:02:49.537151  319404 cri.go:89] found id: "f3d24f3739abc889dcbb426abbf3b380336ddafb494a0b1d64a843f6189a19d0"
	I1123 09:02:49.537155  319404 cri.go:89] found id: "a028a05b2a7941979bb89b131402d5423bd73f7f4ad4b230d4a58cf622da8d85"
	I1123 09:02:49.537160  319404 cri.go:89] found id: "f5c1bc194c3b4fc7b5d8e2f47b51845d9a335c13f9879769b619d883841f25f4"
	I1123 09:02:49.537164  319404 cri.go:89] found id: "10634abd560004335d2e9611aa603556560fb6704e2dd0a376e2af47be6e9d37"
	I1123 09:02:49.537167  319404 cri.go:89] found id: "f9b138bbbfef9748bb9fc39c82d498ae87ac8d5da5ed98f16b602617b6e822b0"
	I1123 09:02:49.537172  319404 cri.go:89] found id: ""
	I1123 09:02:49.537223  319404 ssh_runner.go:195] Run: sudo runc list -f json
	I1123 09:02:49.551065  319404 out.go:203] 
	W1123 09:02:49.552449  319404 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T09:02:49Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T09:02:49Z" level=error msg="open /run/runc: no such file or directory"
	
	W1123 09:02:49.552465  319404 out.go:285] * 
	* 
	W1123 09:02:49.556549  319404 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1123 09:02:49.557667  319404 out.go:203] 

** /stderr **
pause_test.go:112: failed to pause minikube with args: "out/minikube-linux-amd64 pause -p pause-397202 --alsologtostderr -v=5" : exit status 80
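The retry.go:31 lines in the stderr above show the shape of the failure: each `sudo runc list -f json` attempt is retried after a randomized delay until minikube gives up with GUEST_PAUSE. A rough sketch of that loop; the function names, deadline, and delay bound here are illustrative, not minikube's actual implementation:

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// retryUntil re-runs op with randomized waits until it succeeds or the
	// deadline elapses, mirroring the "will retry after ..." lines above.
	func retryUntil(deadline time.Duration, op func() error) error {
		start := time.Now()
		var err error
		for time.Since(start) < deadline {
			if err = op(); err == nil {
				return nil
			}
			wait := time.Duration(rand.Int63n(int64(600 * time.Millisecond)))
			fmt.Printf("will retry after %v: %v\n", wait, err)
			time.Sleep(wait)
		}
		return err
	}

	func main() {
		// Stand-in for the failing `sudo runc list -f json` seen above.
		err := retryUntil(2*time.Second, func() error {
			return errors.New("open /run/runc: no such file or directory")
		})
		fmt.Println("exiting due to GUEST_PAUSE:", err)
	}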
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPause/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestPause/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect pause-397202
helpers_test.go:243: (dbg) docker inspect pause-397202:

-- stdout --
	[
	    {
	        "Id": "53d107e61b0ac5af02d2042b3d93838fbff8cec929cf352f3782e5385bdc4d48",
	        "Created": "2025-11-23T09:02:02.481657818Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 308142,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-23T09:02:02.520175353Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:133ca4ac39008d0056ad45d8cb70521d6b70d6e1b8bbff4678fd4b354efbdf70",
	        "ResolvConfPath": "/var/lib/docker/containers/53d107e61b0ac5af02d2042b3d93838fbff8cec929cf352f3782e5385bdc4d48/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/53d107e61b0ac5af02d2042b3d93838fbff8cec929cf352f3782e5385bdc4d48/hostname",
	        "HostsPath": "/var/lib/docker/containers/53d107e61b0ac5af02d2042b3d93838fbff8cec929cf352f3782e5385bdc4d48/hosts",
	        "LogPath": "/var/lib/docker/containers/53d107e61b0ac5af02d2042b3d93838fbff8cec929cf352f3782e5385bdc4d48/53d107e61b0ac5af02d2042b3d93838fbff8cec929cf352f3782e5385bdc4d48-json.log",
	        "Name": "/pause-397202",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-397202:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "pause-397202",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "53d107e61b0ac5af02d2042b3d93838fbff8cec929cf352f3782e5385bdc4d48",
	                "LowerDir": "/var/lib/docker/overlay2/0e202f1cd6771a178f21d7c9a2d52a69b658c6fec21f540ce3cba65868199149-init/diff:/var/lib/docker/overlay2/5d7426d7b7590330a796f9ce8929a3dfaf6f95af5e86f4e4ea7f2a7f53308616/diff",
	                "MergedDir": "/var/lib/docker/overlay2/0e202f1cd6771a178f21d7c9a2d52a69b658c6fec21f540ce3cba65868199149/merged",
	                "UpperDir": "/var/lib/docker/overlay2/0e202f1cd6771a178f21d7c9a2d52a69b658c6fec21f540ce3cba65868199149/diff",
	                "WorkDir": "/var/lib/docker/overlay2/0e202f1cd6771a178f21d7c9a2d52a69b658c6fec21f540ce3cba65868199149/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "pause-397202",
	                "Source": "/var/lib/docker/volumes/pause-397202/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-397202",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-397202",
	                "name.minikube.sigs.k8s.io": "pause-397202",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "701c2bb6fb4ab37b03e49222984024f6b5d0e4c8b0e2032d68933a021ce06edf",
	            "SandboxKey": "/var/run/docker/netns/701c2bb6fb4a",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33028"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33029"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33032"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33030"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33031"
	                    }
	                ]
	            },
	            "Networks": {
	                "pause-397202": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "3799106e8b741ffca9315403c00f96db99df0f304bc861c94077c9c95bb62b3d",
	                    "EndpointID": "a1972208148e7b90a3b28a4bea475bec1d808fc49b6ca9c94725c3362c433c0c",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "MacAddress": "12:cb:19:b4:d0:62",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "pause-397202",
	                        "53d107e61b0a"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
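The inspect dump above is also where the harness's earlier port lookup (cli_runner.go, template {{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}) resolves to 33028. A small sketch of the same lookup from Go, using the command and profile name shown in the log:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// Pull the host port mapped to the container's 22/tcp.
		tmpl := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
		out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, "pause-397202").Output()
		if err != nil {
			fmt.Println("inspect failed:", err)
			return
		}
		fmt.Println("ssh host port:", strings.TrimSpace(string(out))) // 33028 in the dump above
	}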
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-397202 -n pause-397202
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p pause-397202 -n pause-397202: exit status 2 (336.021473ms)

-- stdout --
	Running

                                                
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestPause/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPause/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p pause-397202 logs -n 25
helpers_test.go:260: TestPause/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                   ARGS                                                                   │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p cilium-741183                                                                                                                         │ cilium-741183             │ jenkins │ v1.37.0 │ 23 Nov 25 09:00 UTC │ 23 Nov 25 09:00 UTC │
	│ start   │ -p kubernetes-upgrade-064370 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio │ kubernetes-upgrade-064370 │ jenkins │ v1.37.0 │ 23 Nov 25 09:00 UTC │ 23 Nov 25 09:00 UTC │
	│ stop    │ -p kubernetes-upgrade-064370                                                                                                             │ kubernetes-upgrade-064370 │ jenkins │ v1.37.0 │ 23 Nov 25 09:00 UTC │ 23 Nov 25 09:00 UTC │
	│ start   │ -p kubernetes-upgrade-064370 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio │ kubernetes-upgrade-064370 │ jenkins │ v1.37.0 │ 23 Nov 25 09:00 UTC │                     │
	│ delete  │ -p offline-crio-228886                                                                                                                   │ offline-crio-228886       │ jenkins │ v1.37.0 │ 23 Nov 25 09:00 UTC │ 23 Nov 25 09:00 UTC │
	│ start   │ -p running-upgrade-760153 --memory=3072 --vm-driver=docker  --container-runtime=crio                                                     │ running-upgrade-760153    │ jenkins │ v1.32.0 │ 23 Nov 25 09:00 UTC │ 23 Nov 25 09:01 UTC │
	│ stop    │ stopped-upgrade-248610 stop                                                                                                              │ stopped-upgrade-248610    │ jenkins │ v1.32.0 │ 23 Nov 25 09:01 UTC │ 23 Nov 25 09:01 UTC │
	│ start   │ -p missing-upgrade-265184 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                 │ missing-upgrade-265184    │ jenkins │ v1.37.0 │ 23 Nov 25 09:01 UTC │ 23 Nov 25 09:01 UTC │
	│ start   │ -p stopped-upgrade-248610 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                 │ stopped-upgrade-248610    │ jenkins │ v1.37.0 │ 23 Nov 25 09:01 UTC │ 23 Nov 25 09:01 UTC │
	│ start   │ -p running-upgrade-760153 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                 │ running-upgrade-760153    │ jenkins │ v1.37.0 │ 23 Nov 25 09:01 UTC │ 23 Nov 25 09:01 UTC │
	│ delete  │ -p stopped-upgrade-248610                                                                                                                │ stopped-upgrade-248610    │ jenkins │ v1.37.0 │ 23 Nov 25 09:01 UTC │ 23 Nov 25 09:01 UTC │
	│ start   │ -p force-systemd-flag-786725 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio              │ force-systemd-flag-786725 │ jenkins │ v1.37.0 │ 23 Nov 25 09:01 UTC │ 23 Nov 25 09:01 UTC │
	│ delete  │ -p running-upgrade-760153                                                                                                                │ running-upgrade-760153    │ jenkins │ v1.37.0 │ 23 Nov 25 09:01 UTC │ 23 Nov 25 09:01 UTC │
	│ start   │ -p force-systemd-env-696878 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                               │ force-systemd-env-696878  │ jenkins │ v1.37.0 │ 23 Nov 25 09:01 UTC │ 23 Nov 25 09:02 UTC │
	│ ssh     │ force-systemd-flag-786725 ssh cat /etc/crio/crio.conf.d/02-crio.conf                                                                     │ force-systemd-flag-786725 │ jenkins │ v1.37.0 │ 23 Nov 25 09:01 UTC │ 23 Nov 25 09:01 UTC │
	│ delete  │ -p force-systemd-flag-786725                                                                                                             │ force-systemd-flag-786725 │ jenkins │ v1.37.0 │ 23 Nov 25 09:01 UTC │ 23 Nov 25 09:01 UTC │
	│ delete  │ -p missing-upgrade-265184                                                                                                                │ missing-upgrade-265184    │ jenkins │ v1.37.0 │ 23 Nov 25 09:01 UTC │ 23 Nov 25 09:01 UTC │
	│ start   │ -p pause-397202 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio                                │ pause-397202              │ jenkins │ v1.37.0 │ 23 Nov 25 09:01 UTC │ 23 Nov 25 09:02 UTC │
	│ start   │ -p cert-expiration-723349 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                   │ cert-expiration-723349    │ jenkins │ v1.37.0 │ 23 Nov 25 09:01 UTC │ 23 Nov 25 09:02 UTC │
	│ delete  │ -p force-systemd-env-696878                                                                                                              │ force-systemd-env-696878  │ jenkins │ v1.37.0 │ 23 Nov 25 09:02 UTC │ 23 Nov 25 09:02 UTC │
	│ start   │ -p NoKubernetes-457254 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio                            │ NoKubernetes-457254       │ jenkins │ v1.37.0 │ 23 Nov 25 09:02 UTC │                     │
	│ start   │ -p NoKubernetes-457254 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                    │ NoKubernetes-457254       │ jenkins │ v1.37.0 │ 23 Nov 25 09:02 UTC │ 23 Nov 25 09:02 UTC │
	│ start   │ -p NoKubernetes-457254 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                    │ NoKubernetes-457254       │ jenkins │ v1.37.0 │ 23 Nov 25 09:02 UTC │                     │
	│ start   │ -p pause-397202 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                         │ pause-397202              │ jenkins │ v1.37.0 │ 23 Nov 25 09:02 UTC │ 23 Nov 25 09:02 UTC │
	│ pause   │ -p pause-397202 --alsologtostderr -v=5                                                                                                   │ pause-397202              │ jenkins │ v1.37.0 │ 23 Nov 25 09:02 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/23 09:02:41
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1123 09:02:41.425406  318073 out.go:360] Setting OutFile to fd 1 ...
	I1123 09:02:41.425676  318073 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 09:02:41.425687  318073 out.go:374] Setting ErrFile to fd 2...
	I1123 09:02:41.425694  318073 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 09:02:41.425927  318073 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21969-103686/.minikube/bin
	I1123 09:02:41.426372  318073 out.go:368] Setting JSON to false
	I1123 09:02:41.427561  318073 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":6301,"bootTime":1763882260,"procs":319,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1123 09:02:41.427618  318073 start.go:143] virtualization: kvm guest
	I1123 09:02:41.429602  318073 out.go:179] * [pause-397202] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1123 09:02:41.430682  318073 out.go:179]   - MINIKUBE_LOCATION=21969
	I1123 09:02:41.430686  318073 notify.go:221] Checking for updates...
	I1123 09:02:41.431957  318073 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1123 09:02:41.433206  318073 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21969-103686/kubeconfig
	I1123 09:02:41.434336  318073 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21969-103686/.minikube
	I1123 09:02:41.435378  318073 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1123 09:02:41.436425  318073 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1123 09:02:41.437947  318073 config.go:182] Loaded profile config "pause-397202": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 09:02:41.438502  318073 driver.go:422] Setting default libvirt URI to qemu:///system
	I1123 09:02:41.462129  318073 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1123 09:02:41.462234  318073 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 09:02:41.522384  318073 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:81 OomKillDisable:false NGoroutines:87 SystemTime:2025-11-23 09:02:41.510807423 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1123 09:02:41.522499  318073 docker.go:319] overlay module found
	I1123 09:02:41.526083  318073 out.go:179] * Using the docker driver based on existing profile
	I1123 09:02:41.527270  318073 start.go:309] selected driver: docker
	I1123 09:02:41.527288  318073 start.go:927] validating driver "docker" against &{Name:pause-397202 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-397202 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 09:02:41.527380  318073 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1123 09:02:41.527448  318073 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 09:02:41.589229  318073 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:81 OomKillDisable:false NGoroutines:87 SystemTime:2025-11-23 09:02:41.577546643 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1123 09:02:41.589887  318073 cni.go:84] Creating CNI manager for ""
	I1123 09:02:41.589990  318073 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1123 09:02:41.590076  318073 start.go:353] cluster config:
	{Name:pause-397202 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-397202 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 09:02:41.591897  318073 out.go:179] * Starting "pause-397202" primary control-plane node in "pause-397202" cluster
	I1123 09:02:41.593100  318073 cache.go:134] Beginning downloading kic base image for docker with crio
	I1123 09:02:41.594235  318073 out.go:179] * Pulling base image v0.0.48-1763789673-21948 ...
	I1123 09:02:41.595333  318073 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1123 09:02:41.595369  318073 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21969-103686/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1123 09:02:41.595385  318073 cache.go:65] Caching tarball of preloaded images
	I1123 09:02:41.595420  318073 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1123 09:02:41.595471  318073 preload.go:238] Found /home/jenkins/minikube-integration/21969-103686/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1123 09:02:41.595492  318073 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1123 09:02:41.595635  318073 profile.go:143] Saving config to /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/pause-397202/config.json ...
	I1123 09:02:41.619453  318073 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon, skipping pull
	I1123 09:02:41.619474  318073 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in daemon, skipping load
	I1123 09:02:41.619488  318073 cache.go:243] Successfully downloaded all kic artifacts
	I1123 09:02:41.619524  318073 start.go:360] acquireMachinesLock for pause-397202: {Name:mk86d460701ca2570c9c98015bd63118b40a5ef2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1123 09:02:41.619583  318073 start.go:364] duration metric: took 40.336µs to acquireMachinesLock for "pause-397202"
	I1123 09:02:41.619599  318073 start.go:96] Skipping create...Using existing machine configuration
	I1123 09:02:41.619608  318073 fix.go:54] fixHost starting: 
	I1123 09:02:41.619823  318073 cli_runner.go:164] Run: docker container inspect pause-397202 --format={{.State.Status}}
	I1123 09:02:41.641456  318073 fix.go:112] recreateIfNeeded on pause-397202: state=Running err=<nil>
	W1123 09:02:41.641496  318073 fix.go:138] unexpected machine state, will restart: <nil>
	I1123 09:02:40.130869  317334 ssh_runner.go:195] Run: sudo systemctl stop -f kubelet
	I1123 09:02:40.161700  317334 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1123 09:02:40.161778  317334 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1123 09:02:40.190007  317334 cri.go:89] found id: "8653fd007ce583a2d825eb177fdef0cce573312f336809a2c9ce21ec4787bdf8"
	I1123 09:02:40.190035  317334 cri.go:89] found id: "e03d60227209ee0a10353ceee3143cd3a825f70fbe920c9b6a144db4991ee676"
	I1123 09:02:40.190040  317334 cri.go:89] found id: "f1b9fa1dd04a10f21f27a858f15713e3827efdf9ddb6e87ae16648c562ab8894"
	I1123 09:02:40.190044  317334 cri.go:89] found id: "4bf24e6b47b0cec1bfec2975525a89c97ba7b454c63c75a198832221c2ee9e14"
	I1123 09:02:40.190047  317334 cri.go:89] found id: ""
	W1123 09:02:40.190055  317334 kubeadm.go:839] found 4 kube-system containers to stop
	I1123 09:02:40.190061  317334 cri.go:252] Stopping containers: [8653fd007ce583a2d825eb177fdef0cce573312f336809a2c9ce21ec4787bdf8 e03d60227209ee0a10353ceee3143cd3a825f70fbe920c9b6a144db4991ee676 f1b9fa1dd04a10f21f27a858f15713e3827efdf9ddb6e87ae16648c562ab8894 4bf24e6b47b0cec1bfec2975525a89c97ba7b454c63c75a198832221c2ee9e14]
	I1123 09:02:40.190114  317334 ssh_runner.go:195] Run: which crictl
	I1123 09:02:40.194226  317334 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl stop --timeout=10 8653fd007ce583a2d825eb177fdef0cce573312f336809a2c9ce21ec4787bdf8 e03d60227209ee0a10353ceee3143cd3a825f70fbe920c9b6a144db4991ee676 f1b9fa1dd04a10f21f27a858f15713e3827efdf9ddb6e87ae16648c562ab8894 4bf24e6b47b0cec1bfec2975525a89c97ba7b454c63c75a198832221c2ee9e14
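The sequence above is the CRI-O teardown path: minikube first enumerates kube-system containers by pod-namespace label, then stops them with a 10-second grace period. The equivalent manual invocation, assuming crictl is on PATH and pointed at CRI-O's socket, is roughly:

	# list all kube-system containers (running or not), then stop them with a 10s grace period
	ids=$(sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system)
	sudo crictl stop --timeout=10 $ids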
	I1123 09:02:38.778478  284685 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1123 09:02:38.778954  284685 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1123 09:02:38.779026  284685 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1123 09:02:38.779080  284685 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1123 09:02:38.810190  284685 cri.go:89] found id: "7625a0bbc7717e3a2af3d02daa04ca8fad92012d00a0e55d65b4e67dfb7e49bd"
	I1123 09:02:38.810212  284685 cri.go:89] found id: ""
	I1123 09:02:38.810222  284685 logs.go:282] 1 containers: [7625a0bbc7717e3a2af3d02daa04ca8fad92012d00a0e55d65b4e67dfb7e49bd]
	I1123 09:02:38.810281  284685 ssh_runner.go:195] Run: which crictl
	I1123 09:02:38.814197  284685 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1123 09:02:38.814257  284685 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1123 09:02:38.844469  284685 cri.go:89] found id: ""
	I1123 09:02:38.844499  284685 logs.go:282] 0 containers: []
	W1123 09:02:38.844510  284685 logs.go:284] No container was found matching "etcd"
	I1123 09:02:38.844518  284685 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1123 09:02:38.844580  284685 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1123 09:02:38.875170  284685 cri.go:89] found id: ""
	I1123 09:02:38.875192  284685 logs.go:282] 0 containers: []
	W1123 09:02:38.875200  284685 logs.go:284] No container was found matching "coredns"
	I1123 09:02:38.875206  284685 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1123 09:02:38.875260  284685 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1123 09:02:38.905851  284685 cri.go:89] found id: "fade67abbc989ba285c7606dd3499da4894a7554fc5ea7609dbd2b5cad03e58d"
	I1123 09:02:38.905875  284685 cri.go:89] found id: ""
	I1123 09:02:38.905886  284685 logs.go:282] 1 containers: [fade67abbc989ba285c7606dd3499da4894a7554fc5ea7609dbd2b5cad03e58d]
	I1123 09:02:38.905944  284685 ssh_runner.go:195] Run: which crictl
	I1123 09:02:38.910119  284685 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1123 09:02:38.910180  284685 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1123 09:02:38.937312  284685 cri.go:89] found id: ""
	I1123 09:02:38.937339  284685 logs.go:282] 0 containers: []
	W1123 09:02:38.937349  284685 logs.go:284] No container was found matching "kube-proxy"
	I1123 09:02:38.937357  284685 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1123 09:02:38.937420  284685 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1123 09:02:38.968550  284685 cri.go:89] found id: "d38c81ac5f3288a5c1ad59bb1d159bb76a305a9504a74fa14594dca89bdf4306"
	I1123 09:02:38.968572  284685 cri.go:89] found id: ""
	I1123 09:02:38.968580  284685 logs.go:282] 1 containers: [d38c81ac5f3288a5c1ad59bb1d159bb76a305a9504a74fa14594dca89bdf4306]
	I1123 09:02:38.968639  284685 ssh_runner.go:195] Run: which crictl
	I1123 09:02:38.972540  284685 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1123 09:02:38.972614  284685 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1123 09:02:38.999487  284685 cri.go:89] found id: ""
	I1123 09:02:38.999511  284685 logs.go:282] 0 containers: []
	W1123 09:02:38.999519  284685 logs.go:284] No container was found matching "kindnet"
	I1123 09:02:38.999525  284685 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1123 09:02:38.999585  284685 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1123 09:02:39.026006  284685 cri.go:89] found id: ""
	I1123 09:02:39.026032  284685 logs.go:282] 0 containers: []
	W1123 09:02:39.026041  284685 logs.go:284] No container was found matching "storage-provisioner"
	I1123 09:02:39.026051  284685 logs.go:123] Gathering logs for describe nodes ...
	I1123 09:02:39.026064  284685 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1123 09:02:39.090293  284685 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1123 09:02:39.090313  284685 logs.go:123] Gathering logs for kube-apiserver [7625a0bbc7717e3a2af3d02daa04ca8fad92012d00a0e55d65b4e67dfb7e49bd] ...
	I1123 09:02:39.090328  284685 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7625a0bbc7717e3a2af3d02daa04ca8fad92012d00a0e55d65b4e67dfb7e49bd"
	I1123 09:02:39.124773  284685 logs.go:123] Gathering logs for kube-scheduler [fade67abbc989ba285c7606dd3499da4894a7554fc5ea7609dbd2b5cad03e58d] ...
	I1123 09:02:39.124800  284685 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fade67abbc989ba285c7606dd3499da4894a7554fc5ea7609dbd2b5cad03e58d"
	I1123 09:02:39.175221  284685 logs.go:123] Gathering logs for kube-controller-manager [d38c81ac5f3288a5c1ad59bb1d159bb76a305a9504a74fa14594dca89bdf4306] ...
	I1123 09:02:39.175266  284685 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d38c81ac5f3288a5c1ad59bb1d159bb76a305a9504a74fa14594dca89bdf4306"
	I1123 09:02:39.203214  284685 logs.go:123] Gathering logs for CRI-O ...
	I1123 09:02:39.203256  284685 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1123 09:02:39.244803  284685 logs.go:123] Gathering logs for container status ...
	I1123 09:02:39.244846  284685 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1123 09:02:39.280005  284685 logs.go:123] Gathering logs for kubelet ...
	I1123 09:02:39.280038  284685 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1123 09:02:39.370366  284685 logs.go:123] Gathering logs for dmesg ...
	I1123 09:02:39.370399  284685 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
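Each retry of the healthz loop above is a plain HTTPS GET against the apiserver; the connection-refused errors mean nothing is listening on 192.168.85.2:8443 yet, which is why the log-gathering fallback kicks in. A hand-run equivalent of the probe (a sketch; -k skips certificate verification since the cluster CA only lives on the node):

	curl -k https://192.168.85.2:8443/healthz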
	I1123 09:02:41.890398  284685 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1123 09:02:41.890806  284685 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1123 09:02:41.890860  284685 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1123 09:02:41.890910  284685 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1123 09:02:41.917486  284685 cri.go:89] found id: "7625a0bbc7717e3a2af3d02daa04ca8fad92012d00a0e55d65b4e67dfb7e49bd"
	I1123 09:02:41.917508  284685 cri.go:89] found id: ""
	I1123 09:02:41.917517  284685 logs.go:282] 1 containers: [7625a0bbc7717e3a2af3d02daa04ca8fad92012d00a0e55d65b4e67dfb7e49bd]
	I1123 09:02:41.917575  284685 ssh_runner.go:195] Run: which crictl
	I1123 09:02:41.921794  284685 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1123 09:02:41.921865  284685 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1123 09:02:41.950855  284685 cri.go:89] found id: ""
	I1123 09:02:41.950884  284685 logs.go:282] 0 containers: []
	W1123 09:02:41.950903  284685 logs.go:284] No container was found matching "etcd"
	I1123 09:02:41.950909  284685 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1123 09:02:41.950958  284685 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1123 09:02:41.980098  284685 cri.go:89] found id: ""
	I1123 09:02:41.980127  284685 logs.go:282] 0 containers: []
	W1123 09:02:41.980139  284685 logs.go:284] No container was found matching "coredns"
	I1123 09:02:41.980147  284685 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1123 09:02:41.980209  284685 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1123 09:02:42.009694  284685 cri.go:89] found id: "fade67abbc989ba285c7606dd3499da4894a7554fc5ea7609dbd2b5cad03e58d"
	I1123 09:02:42.009716  284685 cri.go:89] found id: ""
	I1123 09:02:42.009727  284685 logs.go:282] 1 containers: [fade67abbc989ba285c7606dd3499da4894a7554fc5ea7609dbd2b5cad03e58d]
	I1123 09:02:42.009786  284685 ssh_runner.go:195] Run: which crictl
	I1123 09:02:42.014067  284685 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1123 09:02:42.014133  284685 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1123 09:02:42.042244  284685 cri.go:89] found id: ""
	I1123 09:02:42.042269  284685 logs.go:282] 0 containers: []
	W1123 09:02:42.042279  284685 logs.go:284] No container was found matching "kube-proxy"
	I1123 09:02:42.042287  284685 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1123 09:02:42.042346  284685 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1123 09:02:42.071725  284685 cri.go:89] found id: "d38c81ac5f3288a5c1ad59bb1d159bb76a305a9504a74fa14594dca89bdf4306"
	I1123 09:02:42.071753  284685 cri.go:89] found id: ""
	I1123 09:02:42.071765  284685 logs.go:282] 1 containers: [d38c81ac5f3288a5c1ad59bb1d159bb76a305a9504a74fa14594dca89bdf4306]
	I1123 09:02:42.071821  284685 ssh_runner.go:195] Run: which crictl
	I1123 09:02:42.075736  284685 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1123 09:02:42.075789  284685 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1123 09:02:42.104268  284685 cri.go:89] found id: ""
	I1123 09:02:42.104294  284685 logs.go:282] 0 containers: []
	W1123 09:02:42.104303  284685 logs.go:284] No container was found matching "kindnet"
	I1123 09:02:42.104310  284685 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1123 09:02:42.104370  284685 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1123 09:02:42.129319  284685 cri.go:89] found id: ""
	I1123 09:02:42.129345  284685 logs.go:282] 0 containers: []
	W1123 09:02:42.129355  284685 logs.go:284] No container was found matching "storage-provisioner"
	I1123 09:02:42.129366  284685 logs.go:123] Gathering logs for container status ...
	I1123 09:02:42.129383  284685 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1123 09:02:42.159589  284685 logs.go:123] Gathering logs for kubelet ...
	I1123 09:02:42.159615  284685 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1123 09:02:42.255293  284685 logs.go:123] Gathering logs for dmesg ...
	I1123 09:02:42.255329  284685 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1123 09:02:42.274505  284685 logs.go:123] Gathering logs for describe nodes ...
	I1123 09:02:42.274536  284685 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1123 09:02:42.333270  284685 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1123 09:02:42.333290  284685 logs.go:123] Gathering logs for kube-apiserver [7625a0bbc7717e3a2af3d02daa04ca8fad92012d00a0e55d65b4e67dfb7e49bd] ...
	I1123 09:02:42.333303  284685 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7625a0bbc7717e3a2af3d02daa04ca8fad92012d00a0e55d65b4e67dfb7e49bd"
	I1123 09:02:42.365074  284685 logs.go:123] Gathering logs for kube-scheduler [fade67abbc989ba285c7606dd3499da4894a7554fc5ea7609dbd2b5cad03e58d] ...
	I1123 09:02:42.365103  284685 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fade67abbc989ba285c7606dd3499da4894a7554fc5ea7609dbd2b5cad03e58d"
	I1123 09:02:42.410337  284685 logs.go:123] Gathering logs for kube-controller-manager [d38c81ac5f3288a5c1ad59bb1d159bb76a305a9504a74fa14594dca89bdf4306] ...
	I1123 09:02:42.410364  284685 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d38c81ac5f3288a5c1ad59bb1d159bb76a305a9504a74fa14594dca89bdf4306"
	I1123 09:02:42.437770  284685 logs.go:123] Gathering logs for CRI-O ...
	I1123 09:02:42.437797  284685 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1123 09:02:41.643172  318073 out.go:252] * Updating the running docker "pause-397202" container ...
	I1123 09:02:41.643209  318073 machine.go:94] provisionDockerMachine start ...
	I1123 09:02:41.643273  318073 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-397202
	I1123 09:02:41.661997  318073 main.go:143] libmachine: Using SSH client type: native
	I1123 09:02:41.662329  318073 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33028 <nil> <nil>}
	I1123 09:02:41.662347  318073 main.go:143] libmachine: About to run SSH command:
	hostname
	I1123 09:02:41.807434  318073 main.go:143] libmachine: SSH cmd err, output: <nil>: pause-397202
	
	I1123 09:02:41.807492  318073 ubuntu.go:182] provisioning hostname "pause-397202"
	I1123 09:02:41.807570  318073 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-397202
	I1123 09:02:41.826730  318073 main.go:143] libmachine: Using SSH client type: native
	I1123 09:02:41.826939  318073 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33028 <nil> <nil>}
	I1123 09:02:41.826952  318073 main.go:143] libmachine: About to run SSH command:
	sudo hostname pause-397202 && echo "pause-397202" | sudo tee /etc/hostname
	I1123 09:02:41.983185  318073 main.go:143] libmachine: SSH cmd err, output: <nil>: pause-397202
	
	I1123 09:02:41.983270  318073 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-397202
	I1123 09:02:42.004930  318073 main.go:143] libmachine: Using SSH client type: native
	I1123 09:02:42.005316  318073 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33028 <nil> <nil>}
	I1123 09:02:42.005346  318073 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-397202' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-397202/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-397202' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1123 09:02:42.154841  318073 main.go:143] libmachine: SSH cmd err, output: <nil>: 
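The shell script above keeps 127.0.1.1 pointing at the machine's hostname: if some /etc/hosts line already maps to pause-397202 it does nothing, otherwise it rewrites an existing 127.0.1.1 entry in place or appends a fresh one. Confirming the result on the node:

	grep '^127.0.1.1' /etc/hosts
	# expected: 127.0.1.1 pause-397202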
	I1123 09:02:42.154874  318073 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21969-103686/.minikube CaCertPath:/home/jenkins/minikube-integration/21969-103686/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21969-103686/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21969-103686/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21969-103686/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21969-103686/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21969-103686/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21969-103686/.minikube}
	I1123 09:02:42.154900  318073 ubuntu.go:190] setting up certificates
	I1123 09:02:42.154927  318073 provision.go:84] configureAuth start
	I1123 09:02:42.155015  318073 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-397202
	I1123 09:02:42.174111  318073 provision.go:143] copyHostCerts
	I1123 09:02:42.174182  318073 exec_runner.go:144] found /home/jenkins/minikube-integration/21969-103686/.minikube/cert.pem, removing ...
	I1123 09:02:42.174206  318073 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21969-103686/.minikube/cert.pem
	I1123 09:02:42.174291  318073 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21969-103686/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21969-103686/.minikube/cert.pem (1123 bytes)
	I1123 09:02:42.174401  318073 exec_runner.go:144] found /home/jenkins/minikube-integration/21969-103686/.minikube/key.pem, removing ...
	I1123 09:02:42.174413  318073 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21969-103686/.minikube/key.pem
	I1123 09:02:42.174452  318073 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21969-103686/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21969-103686/.minikube/key.pem (1675 bytes)
	I1123 09:02:42.174530  318073 exec_runner.go:144] found /home/jenkins/minikube-integration/21969-103686/.minikube/ca.pem, removing ...
	I1123 09:02:42.174540  318073 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21969-103686/.minikube/ca.pem
	I1123 09:02:42.174575  318073 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21969-103686/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21969-103686/.minikube/ca.pem (1078 bytes)
	I1123 09:02:42.174655  318073 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21969-103686/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21969-103686/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21969-103686/.minikube/certs/ca-key.pem org=jenkins.pause-397202 san=[127.0.0.1 192.168.94.2 localhost minikube pause-397202]
	I1123 09:02:42.266102  318073 provision.go:177] copyRemoteCerts
	I1123 09:02:42.266166  318073 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1123 09:02:42.266216  318073 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-397202
	I1123 09:02:42.286657  318073 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33028 SSHKeyPath:/home/jenkins/minikube-integration/21969-103686/.minikube/machines/pause-397202/id_rsa Username:docker}
	I1123 09:02:42.390049  318073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-103686/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1123 09:02:42.407763  318073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-103686/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1123 09:02:42.425650  318073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-103686/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1123 09:02:42.444659  318073 provision.go:87] duration metric: took 289.712666ms to configureAuth
	I1123 09:02:42.444694  318073 ubuntu.go:206] setting minikube options for container-runtime
	I1123 09:02:42.444986  318073 config.go:182] Loaded profile config "pause-397202": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 09:02:42.445149  318073 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-397202
	I1123 09:02:42.464874  318073 main.go:143] libmachine: Using SSH client type: native
	I1123 09:02:42.465128  318073 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33028 <nil> <nil>}
	I1123 09:02:42.465145  318073 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1123 09:02:42.801593  318073 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1123 09:02:42.801621  318073 machine.go:97] duration metric: took 1.158404937s to provisionDockerMachine
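The crio.minikube file written just above carries extra CRI-O flags into the service environment; the crio unit in the kicbase image presumably sources it via an EnvironmentFile= directive, which is why a systemctl restart crio follows in the same SSH command. Verifying on the node is a one-liner:

	cat /etc/sysconfig/crio.minikube
	# CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '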
	I1123 09:02:42.801633  318073 start.go:293] postStartSetup for "pause-397202" (driver="docker")
	I1123 09:02:42.801645  318073 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1123 09:02:42.801714  318073 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1123 09:02:42.801761  318073 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-397202
	I1123 09:02:42.819627  318073 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33028 SSHKeyPath:/home/jenkins/minikube-integration/21969-103686/.minikube/machines/pause-397202/id_rsa Username:docker}
	I1123 09:02:42.921145  318073 ssh_runner.go:195] Run: cat /etc/os-release
	I1123 09:02:42.924741  318073 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1123 09:02:42.924779  318073 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1123 09:02:42.924791  318073 filesync.go:126] Scanning /home/jenkins/minikube-integration/21969-103686/.minikube/addons for local assets ...
	I1123 09:02:42.924850  318073 filesync.go:126] Scanning /home/jenkins/minikube-integration/21969-103686/.minikube/files for local assets ...
	I1123 09:02:42.924977  318073 filesync.go:149] local asset: /home/jenkins/minikube-integration/21969-103686/.minikube/files/etc/ssl/certs/1072342.pem -> 1072342.pem in /etc/ssl/certs
	I1123 09:02:42.925161  318073 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1123 09:02:42.932379  318073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-103686/.minikube/files/etc/ssl/certs/1072342.pem --> /etc/ssl/certs/1072342.pem (1708 bytes)
	I1123 09:02:42.949370  318073 start.go:296] duration metric: took 147.723069ms for postStartSetup
	I1123 09:02:42.949440  318073 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1123 09:02:42.949485  318073 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-397202
	I1123 09:02:42.966935  318073 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33028 SSHKeyPath:/home/jenkins/minikube-integration/21969-103686/.minikube/machines/pause-397202/id_rsa Username:docker}
	I1123 09:02:43.066143  318073 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1123 09:02:43.070914  318073 fix.go:56] duration metric: took 1.451297959s for fixHost
	I1123 09:02:43.070948  318073 start.go:83] releasing machines lock for "pause-397202", held for 1.451354677s
	I1123 09:02:43.071047  318073 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-397202
	I1123 09:02:43.089276  318073 ssh_runner.go:195] Run: cat /version.json
	I1123 09:02:43.089322  318073 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-397202
	I1123 09:02:43.089347  318073 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1123 09:02:43.089411  318073 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-397202
	I1123 09:02:43.109146  318073 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33028 SSHKeyPath:/home/jenkins/minikube-integration/21969-103686/.minikube/machines/pause-397202/id_rsa Username:docker}
	I1123 09:02:43.110397  318073 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33028 SSHKeyPath:/home/jenkins/minikube-integration/21969-103686/.minikube/machines/pause-397202/id_rsa Username:docker}
	I1123 09:02:43.261825  318073 ssh_runner.go:195] Run: systemctl --version
	I1123 09:02:43.268846  318073 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1123 09:02:43.305670  318073 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1123 09:02:43.310694  318073 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1123 09:02:43.310769  318073 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1123 09:02:43.318835  318073 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1123 09:02:43.318857  318073 start.go:496] detecting cgroup driver to use...
	I1123 09:02:43.318893  318073 detect.go:190] detected "systemd" cgroup driver on host os
	I1123 09:02:43.318937  318073 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1123 09:02:43.333352  318073 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1123 09:02:43.346227  318073 docker.go:218] disabling cri-docker service (if available) ...
	I1123 09:02:43.346290  318073 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1123 09:02:43.361716  318073 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1123 09:02:43.374699  318073 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1123 09:02:43.478316  318073 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1123 09:02:43.581894  318073 docker.go:234] disabling docker service ...
	I1123 09:02:43.581984  318073 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1123 09:02:43.596813  318073 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1123 09:02:43.609860  318073 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1123 09:02:43.720157  318073 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1123 09:02:43.828262  318073 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1123 09:02:43.841310  318073 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1123 09:02:43.855418  318073 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1123 09:02:43.855469  318073 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 09:02:43.864391  318073 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1123 09:02:43.864449  318073 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 09:02:43.873763  318073 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 09:02:43.882725  318073 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 09:02:43.891487  318073 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1123 09:02:43.899537  318073 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 09:02:43.908474  318073 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 09:02:43.916935  318073 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
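Taken together, these sed edits leave the relevant keys of /etc/crio/crio.conf.d/02-crio.conf looking roughly like this (a sketch, assuming the stock keys were present in the base image):

	pause_image = "registry.k8s.io/pause:3.10.1"
	cgroup_manager = "systemd"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]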
	I1123 09:02:43.925542  318073 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1123 09:02:43.932860  318073 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1123 09:02:43.940114  318073 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 09:02:44.042227  318073 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1123 09:02:44.231298  318073 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1123 09:02:44.231364  318073 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1123 09:02:44.236186  318073 start.go:564] Will wait 60s for crictl version
	I1123 09:02:44.236250  318073 ssh_runner.go:195] Run: which crictl
	I1123 09:02:44.239919  318073 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1123 09:02:44.264666  318073 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1123 09:02:44.264742  318073 ssh_runner.go:195] Run: crio --version
	I1123 09:02:44.294183  318073 ssh_runner.go:195] Run: crio --version
	I1123 09:02:44.325732  318073 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1123 09:02:44.326843  318073 cli_runner.go:164] Run: docker network inspect pause-397202 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1123 09:02:44.344872  318073 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1123 09:02:44.349579  318073 kubeadm.go:884] updating cluster {Name:pause-397202 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-397202 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1123 09:02:44.349737  318073 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1123 09:02:44.349788  318073 ssh_runner.go:195] Run: sudo crictl images --output json
	I1123 09:02:44.380798  318073 crio.go:514] all images are preloaded for cri-o runtime.
	I1123 09:02:44.380816  318073 crio.go:433] Images already preloaded, skipping extraction
	I1123 09:02:44.380860  318073 ssh_runner.go:195] Run: sudo crictl images --output json
	I1123 09:02:44.407733  318073 crio.go:514] all images are preloaded for cri-o runtime.
	I1123 09:02:44.407755  318073 cache_images.go:86] Images are preloaded, skipping loading
	I1123 09:02:44.407763  318073 kubeadm.go:935] updating node { 192.168.94.2 8443 v1.34.1 crio true true} ...
	I1123 09:02:44.407874  318073 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=pause-397202 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:pause-397202 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
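The empty ExecStart= line in the generated drop-in above is the standard systemd idiom: it clears the ExecStart inherited from the base kubelet.service before the minikube-specific command is set, so only one ExecStart remains in effect. The merged unit can be inspected on the node with:

	systemctl cat kubelet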
	I1123 09:02:44.407978  318073 ssh_runner.go:195] Run: crio config
	I1123 09:02:44.453732  318073 cni.go:84] Creating CNI manager for ""
	I1123 09:02:44.453751  318073 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1123 09:02:44.453767  318073 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1123 09:02:44.453789  318073 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-397202 NodeName:pause-397202 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1123 09:02:44.453952  318073 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-397202"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
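Two deliberate choices in the kubeadm config rendered above are worth noting: the KubeletConfiguration disables disk-pressure housekeeping (imageGCHighThresholdPercent: 100 plus 0% evictionHard thresholds) so image churn on the CI host cannot evict test pods, and the kube-proxy conntrack timeouts of 0s leave the host's net.netfilter settings untouched, as the inline comments indicate. A quick offline sanity check of a file like this (a sketch; kubeadm config validate is available in recent kubeadm releases):

	sudo kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new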
	I1123 09:02:44.454048  318073 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1123 09:02:44.462498  318073 binaries.go:51] Found k8s binaries, skipping transfer
	I1123 09:02:44.462561  318073 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1123 09:02:44.470535  318073 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (362 bytes)
	I1123 09:02:44.483360  318073 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1123 09:02:44.496320  318073 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2208 bytes)
	I1123 09:02:44.509768  318073 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1123 09:02:44.513673  318073 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 09:02:44.619089  318073 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 09:02:44.632619  318073 certs.go:69] Setting up /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/pause-397202 for IP: 192.168.94.2
	I1123 09:02:44.632638  318073 certs.go:195] generating shared ca certs ...
	I1123 09:02:44.632666  318073 certs.go:227] acquiring lock for ca certs: {Name:mkeed0bc088459219396fb35c578dfb0927f31a3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 09:02:44.632827  318073 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21969-103686/.minikube/ca.key
	I1123 09:02:44.632865  318073 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21969-103686/.minikube/proxy-client-ca.key
	I1123 09:02:44.632875  318073 certs.go:257] generating profile certs ...
	I1123 09:02:44.632952  318073 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/pause-397202/client.key
	I1123 09:02:44.633024  318073 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/pause-397202/apiserver.key.fd956988
	I1123 09:02:44.633056  318073 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/pause-397202/proxy-client.key
	I1123 09:02:44.633156  318073 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-103686/.minikube/certs/107234.pem (1338 bytes)
	W1123 09:02:44.633184  318073 certs.go:480] ignoring /home/jenkins/minikube-integration/21969-103686/.minikube/certs/107234_empty.pem, impossibly tiny 0 bytes
	I1123 09:02:44.633193  318073 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-103686/.minikube/certs/ca-key.pem (1675 bytes)
	I1123 09:02:44.633220  318073 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-103686/.minikube/certs/ca.pem (1078 bytes)
	I1123 09:02:44.633244  318073 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-103686/.minikube/certs/cert.pem (1123 bytes)
	I1123 09:02:44.633267  318073 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-103686/.minikube/certs/key.pem (1675 bytes)
	I1123 09:02:44.633305  318073 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-103686/.minikube/files/etc/ssl/certs/1072342.pem (1708 bytes)
	I1123 09:02:44.633900  318073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-103686/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1123 09:02:44.652888  318073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-103686/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1123 09:02:44.672081  318073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-103686/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1123 09:02:44.690678  318073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-103686/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1671 bytes)
	I1123 09:02:44.709148  318073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/pause-397202/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1123 09:02:44.728081  318073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/pause-397202/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1123 09:02:44.747278  318073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/pause-397202/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1123 09:02:44.765761  318073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/pause-397202/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1123 09:02:44.784060  318073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-103686/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1123 09:02:44.802312  318073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-103686/.minikube/certs/107234.pem --> /usr/share/ca-certificates/107234.pem (1338 bytes)
	I1123 09:02:44.820921  318073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-103686/.minikube/files/etc/ssl/certs/1072342.pem --> /usr/share/ca-certificates/1072342.pem (1708 bytes)
	I1123 09:02:44.838887  318073 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1123 09:02:44.851918  318073 ssh_runner.go:195] Run: openssl version
	I1123 09:02:44.858151  318073 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1123 09:02:44.867594  318073 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1123 09:02:44.871677  318073 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 23 08:20 /usr/share/ca-certificates/minikubeCA.pem
	I1123 09:02:44.871741  318073 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1123 09:02:44.907024  318073 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1123 09:02:44.915559  318073 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/107234.pem && ln -fs /usr/share/ca-certificates/107234.pem /etc/ssl/certs/107234.pem"
	I1123 09:02:44.924342  318073 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/107234.pem
	I1123 09:02:44.928242  318073 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 23 08:25 /usr/share/ca-certificates/107234.pem
	I1123 09:02:44.928297  318073 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/107234.pem
	I1123 09:02:44.963747  318073 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/107234.pem /etc/ssl/certs/51391683.0"
	I1123 09:02:44.972503  318073 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1072342.pem && ln -fs /usr/share/ca-certificates/1072342.pem /etc/ssl/certs/1072342.pem"
	I1123 09:02:44.981511  318073 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1072342.pem
	I1123 09:02:44.985204  318073 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 23 08:25 /usr/share/ca-certificates/1072342.pem
	I1123 09:02:44.985256  318073 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1072342.pem
	I1123 09:02:45.023858  318073 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1072342.pem /etc/ssl/certs/3ec20f2e.0"
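The hash-and-symlink sequence above is how OpenSSL's CA directory lookup works: openssl x509 -hash prints the subject-name hash, and OpenSSL resolves trust by opening <hash>.N symlinks under /etc/ssl/certs. Reproducing one step by hand:

	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	# prints b5213941, hence the /etc/ssl/certs/b5213941.0 symlink created above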
	I1123 09:02:45.032583  318073 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1123 09:02:45.037014  318073 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1123 09:02:45.076487  318073 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1123 09:02:45.116172  318073 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1123 09:02:45.158565  318073 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1123 09:02:45.198031  318073 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1123 09:02:45.238046  318073 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
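Each of these openssl x509 -checkend 86400 runs exits non-zero if the certificate expires within the next 86400 seconds, i.e. 24 hours, which is evidently how minikube decides whether the existing control-plane certs can be reused. For example:

	openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 \
	  && echo "valid for at least 24h" || echo "expiring soon"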
	I1123 09:02:45.276198  318073 kubeadm.go:401] StartCluster: {Name:pause-397202 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-397202 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 09:02:45.276321  318073 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1123 09:02:45.276405  318073 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1123 09:02:45.304993  318073 cri.go:89] found id: "ccc42c322c200e45d184ea9bd71d69ba34b954789b671365dce7545055e88536"
	I1123 09:02:45.305021  318073 cri.go:89] found id: "365b3b573a77a2ff0a22deddb7fdb06e6b2bc920107e22244e4820bc5137df66"
	I1123 09:02:45.305028  318073 cri.go:89] found id: "f3d24f3739abc889dcbb426abbf3b380336ddafb494a0b1d64a843f6189a19d0"
	I1123 09:02:45.305035  318073 cri.go:89] found id: "a028a05b2a7941979bb89b131402d5423bd73f7f4ad4b230d4a58cf622da8d85"
	I1123 09:02:45.305039  318073 cri.go:89] found id: "f5c1bc194c3b4fc7b5d8e2f47b51845d9a335c13f9879769b619d883841f25f4"
	I1123 09:02:45.305045  318073 cri.go:89] found id: "10634abd560004335d2e9611aa603556560fb6704e2dd0a376e2af47be6e9d37"
	I1123 09:02:45.305049  318073 cri.go:89] found id: "f9b138bbbfef9748bb9fc39c82d498ae87ac8d5da5ed98f16b602617b6e822b0"
	I1123 09:02:45.305054  318073 cri.go:89] found id: ""
	I1123 09:02:45.305103  318073 ssh_runner.go:195] Run: sudo runc list -f json
	W1123 09:02:45.318346  318073 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T09:02:45Z" level=error msg="open /run/runc: no such file or directory"
	I1123 09:02:45.318414  318073 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1123 09:02:45.326490  318073 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1123 09:02:45.326506  318073 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1123 09:02:45.326545  318073 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1123 09:02:45.334252  318073 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1123 09:02:45.335318  318073 kubeconfig.go:125] found "pause-397202" server: "https://192.168.94.2:8443"
	I1123 09:02:45.336868  318073 kapi.go:59] client config for pause-397202: &rest.Config{Host:"https://192.168.94.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21969-103686/.minikube/profiles/pause-397202/client.crt", KeyFile:"/home/jenkins/minikube-integration/21969-103686/.minikube/profiles/pause-397202/client.key", CAFile:"/home/jenkins/minikube-integration/21969-103686/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2814ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1123 09:02:45.337379  318073 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1123 09:02:45.337400  318073 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1123 09:02:45.337406  318073 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1123 09:02:45.337412  318073 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1123 09:02:45.337419  318073 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1123 09:02:45.337880  318073 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1123 09:02:45.347110  318073 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.94.2
	I1123 09:02:45.347142  318073 kubeadm.go:602] duration metric: took 20.629446ms to restartPrimaryControlPlane
	I1123 09:02:45.347152  318073 kubeadm.go:403] duration metric: took 70.967869ms to StartCluster
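
The reconfiguration decision above comes down to a single diff: the kubeadm config already on the node is compared against the freshly rendered one, and `diff -u` exiting 0 means the two are identical, which is what produces the "does not require reconfiguration" line. A hedged Go sketch of that check (paths taken from the log; the function name is mine):

// needsReconfig interprets diff's exit status: 0 = identical,
// 1 = files differ, anything else = a real error.
package main

import (
	"fmt"
	"os/exec"
)

func needsReconfig() (bool, error) {
	err := exec.Command("sudo", "diff", "-u",
		"/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new").Run()
	if err == nil {
		return false, nil // identical: keep the running control plane as-is
	}
	if ee, ok := err.(*exec.ExitError); ok && ee.ExitCode() == 1 {
		return true, nil // configs differ: control plane must be reconfigured
	}
	return false, err // exit 2 or exec failure
}

func main() {
	fmt.Println(needsReconfig())
}
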
	I1123 09:02:45.347171  318073 settings.go:142] acquiring lock: {Name:mk7e59eae8b3289f60fef384e6a5716369959bb2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 09:02:45.347244  318073 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21969-103686/kubeconfig
	I1123 09:02:45.348449  318073 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21969-103686/kubeconfig: {Name:mk8cdc20da988b29689f768b3cb01ff7e637077d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 09:02:45.348676  318073 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1123 09:02:45.348736  318073 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1123 09:02:45.348869  318073 config.go:182] Loaded profile config "pause-397202": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 09:02:45.351393  318073 out.go:179] * Verifying Kubernetes components...
	I1123 09:02:45.351405  318073 out.go:179] * Enabled addons: 
	I1123 09:02:45.352530  318073 addons.go:530] duration metric: took 3.791615ms for enable addons: enabled=[]
	I1123 09:02:45.352565  318073 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 09:02:45.460657  318073 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 09:02:45.474419  318073 node_ready.go:35] waiting up to 6m0s for node "pause-397202" to be "Ready" ...
	I1123 09:02:45.482949  318073 node_ready.go:49] node "pause-397202" is "Ready"
	I1123 09:02:45.483004  318073 node_ready.go:38] duration metric: took 8.549199ms for node "pause-397202" to be "Ready" ...
	I1123 09:02:45.483023  318073 api_server.go:52] waiting for apiserver process to appear ...
	I1123 09:02:45.483077  318073 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1123 09:02:45.494949  318073 api_server.go:72] duration metric: took 146.233705ms to wait for apiserver process to appear ...
	I1123 09:02:45.495026  318073 api_server.go:88] waiting for apiserver healthz status ...
	I1123 09:02:45.495055  318073 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1123 09:02:45.499170  318073 api_server.go:279] https://192.168.94.2:8443/healthz returned 200:
	ok
	I1123 09:02:45.500156  318073 api_server.go:141] control plane version: v1.34.1
	I1123 09:02:45.500187  318073 api_server.go:131] duration metric: took 5.151419ms to wait for apiserver health ...
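
The health wait above is a plain HTTP poll: once pgrep confirms a kube-apiserver process, minikube repeatedly GETs /healthz on the advertised endpoint until it answers 200 "ok". A simplified sketch follows; note the real client trusts the cluster CA from ca.crt (see the rest.Config above), while this example skips verification purely to stay self-contained:

// waitHealthz polls an apiserver /healthz URL until it returns 200 or
// the timeout elapses. InsecureSkipVerify is an assumption made only
// to keep the sketch short.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func waitHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 2 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // the "healthz returned 200: ok" case above
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver not healthy within %s", timeout)
}

func main() {
	fmt.Println(waitHealthz("https://192.168.94.2:8443/healthz", time.Minute))
}
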
	I1123 09:02:45.500199  318073 system_pods.go:43] waiting for kube-system pods to appear ...
	I1123 09:02:45.503636  318073 system_pods.go:59] 7 kube-system pods found
	I1123 09:02:45.503664  318073 system_pods.go:61] "coredns-66bc5c9577-llbxg" [9e1f38f4-aec9-4d81-9da4-8077ab957f85] Running
	I1123 09:02:45.503672  318073 system_pods.go:61] "etcd-pause-397202" [a1b3d36c-8f20-462d-893b-f47983b73843] Running
	I1123 09:02:45.503677  318073 system_pods.go:61] "kindnet-hkxw7" [35f423d9-a900-4333-9b4c-835ffc193f45] Running
	I1123 09:02:45.503691  318073 system_pods.go:61] "kube-apiserver-pause-397202" [8c1714af-a1d9-4a7a-b7d7-8b2854751da7] Running
	I1123 09:02:45.503701  318073 system_pods.go:61] "kube-controller-manager-pause-397202" [cc3ffe30-4a58-4276-83ef-87e31c6fbcdd] Running
	I1123 09:02:45.503707  318073 system_pods.go:61] "kube-proxy-qfmgc" [887b8bcb-2b27-42a2-8854-1a7e62edef6b] Running
	I1123 09:02:45.503713  318073 system_pods.go:61] "kube-scheduler-pause-397202" [b554ebb6-788c-4a71-ba03-59696a8a1649] Running
	I1123 09:02:45.503722  318073 system_pods.go:74] duration metric: took 3.515654ms to wait for pod list to return data ...
	I1123 09:02:45.503732  318073 default_sa.go:34] waiting for default service account to be created ...
	I1123 09:02:45.505874  318073 default_sa.go:45] found service account: "default"
	I1123 09:02:45.505899  318073 default_sa.go:55] duration metric: took 2.158632ms for default service account to be created ...
	I1123 09:02:45.505909  318073 system_pods.go:116] waiting for k8s-apps to be running ...
	I1123 09:02:45.508705  318073 system_pods.go:86] 7 kube-system pods found
	I1123 09:02:45.508732  318073 system_pods.go:89] "coredns-66bc5c9577-llbxg" [9e1f38f4-aec9-4d81-9da4-8077ab957f85] Running
	I1123 09:02:45.508739  318073 system_pods.go:89] "etcd-pause-397202" [a1b3d36c-8f20-462d-893b-f47983b73843] Running
	I1123 09:02:45.508744  318073 system_pods.go:89] "kindnet-hkxw7" [35f423d9-a900-4333-9b4c-835ffc193f45] Running
	I1123 09:02:45.508749  318073 system_pods.go:89] "kube-apiserver-pause-397202" [8c1714af-a1d9-4a7a-b7d7-8b2854751da7] Running
	I1123 09:02:45.508754  318073 system_pods.go:89] "kube-controller-manager-pause-397202" [cc3ffe30-4a58-4276-83ef-87e31c6fbcdd] Running
	I1123 09:02:45.508760  318073 system_pods.go:89] "kube-proxy-qfmgc" [887b8bcb-2b27-42a2-8854-1a7e62edef6b] Running
	I1123 09:02:45.508766  318073 system_pods.go:89] "kube-scheduler-pause-397202" [b554ebb6-788c-4a71-ba03-59696a8a1649] Running
	I1123 09:02:45.508775  318073 system_pods.go:126] duration metric: took 2.858706ms to wait for k8s-apps to be running ...
	I1123 09:02:45.508788  318073 system_svc.go:44] waiting for kubelet service to be running ....
	I1123 09:02:45.508835  318073 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 09:02:45.522854  318073 system_svc.go:56] duration metric: took 14.057855ms WaitForService to wait for kubelet
	I1123 09:02:45.522882  318073 kubeadm.go:587] duration metric: took 174.170367ms to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1123 09:02:45.522910  318073 node_conditions.go:102] verifying NodePressure condition ...
	I1123 09:02:45.525149  318073 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1123 09:02:45.525171  318073 node_conditions.go:123] node cpu capacity is 8
	I1123 09:02:45.525185  318073 node_conditions.go:105] duration metric: took 2.269672ms to run NodePressure ...
	I1123 09:02:45.525197  318073 start.go:242] waiting for startup goroutines ...
	I1123 09:02:45.525203  318073 start.go:247] waiting for cluster config update ...
	I1123 09:02:45.525211  318073 start.go:256] writing updated cluster config ...
	I1123 09:02:45.525467  318073 ssh_runner.go:195] Run: rm -f paused
	I1123 09:02:45.529141  318073 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1123 09:02:45.529844  318073 kapi.go:59] client config for pause-397202: &rest.Config{Host:"https://192.168.94.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21969-103686/.minikube/profiles/pause-397202/client.crt", KeyFile:"/home/jenkins/minikube-integration/21969-103686/.minikube/profiles/pause-397202/client.key", CAFile:"/home/jenkins/minikube-integration/21969-103686/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2814ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1123 09:02:45.532514  318073 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-llbxg" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:02:45.536617  318073 pod_ready.go:94] pod "coredns-66bc5c9577-llbxg" is "Ready"
	I1123 09:02:45.536639  318073 pod_ready.go:86] duration metric: took 4.101558ms for pod "coredns-66bc5c9577-llbxg" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:02:45.538626  318073 pod_ready.go:83] waiting for pod "etcd-pause-397202" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:02:45.542362  318073 pod_ready.go:94] pod "etcd-pause-397202" is "Ready"
	I1123 09:02:45.542387  318073 pod_ready.go:86] duration metric: took 3.742411ms for pod "etcd-pause-397202" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:02:45.544199  318073 pod_ready.go:83] waiting for pod "kube-apiserver-pause-397202" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:02:45.548749  318073 pod_ready.go:94] pod "kube-apiserver-pause-397202" is "Ready"
	I1123 09:02:45.548773  318073 pod_ready.go:86] duration metric: took 4.557382ms for pod "kube-apiserver-pause-397202" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:02:45.550710  318073 pod_ready.go:83] waiting for pod "kube-controller-manager-pause-397202" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:02:45.933462  318073 pod_ready.go:94] pod "kube-controller-manager-pause-397202" is "Ready"
	I1123 09:02:45.933491  318073 pod_ready.go:86] duration metric: took 382.762779ms for pod "kube-controller-manager-pause-397202" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:02:46.133278  318073 pod_ready.go:83] waiting for pod "kube-proxy-qfmgc" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:02:46.532928  318073 pod_ready.go:94] pod "kube-proxy-qfmgc" is "Ready"
	I1123 09:02:46.532956  318073 pod_ready.go:86] duration metric: took 399.651771ms for pod "kube-proxy-qfmgc" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:02:46.733087  318073 pod_ready.go:83] waiting for pod "kube-scheduler-pause-397202" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:02:47.132932  318073 pod_ready.go:94] pod "kube-scheduler-pause-397202" is "Ready"
	I1123 09:02:47.132958  318073 pod_ready.go:86] duration metric: took 399.847159ms for pod "kube-scheduler-pause-397202" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:02:47.132992  318073 pod_ready.go:40] duration metric: took 1.603819402s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1123 09:02:47.176313  318073 start.go:625] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1123 09:02:47.178462  318073 out.go:179] * Done! kubectl is now configured to use "pause-397202" cluster and "default" namespace by default
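
Between "rm -f paused" and the final "Done!", the run performs one last readiness sweep: every kube-system pod matching one of the six label selectors listed in the log must report the Ready condition (or be gone). A client-go sketch of that loop, assuming a kubeconfig at the default location; the selector list is copied from the log, everything else is illustrative:

// Poll kube-system pods per component selector until all are Ready.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

var selectors = []string{
	"k8s-app=kube-dns", "component=etcd", "component=kube-apiserver",
	"component=kube-controller-manager", "k8s-app=kube-proxy",
	"component=kube-scheduler",
}

func ready(p corev1.Pod) bool {
	for _, c := range p.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
	defer cancel()
	for _, sel := range selectors {
		for {
			pods, err := client.CoreV1().Pods("kube-system").
				List(ctx, metav1.ListOptions{LabelSelector: sel})
			if err != nil {
				panic(err) // includes ctx deadline exceeded
			}
			allReady := true
			for _, p := range pods.Items {
				if !ready(p) {
					allReady = false
				}
			}
			if allReady { // also true when no pod matches ("or be gone")
				break
			}
			time.Sleep(400 * time.Millisecond)
		}
		fmt.Println(sel, "ready")
	}
}
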
	I1123 09:02:44.980013  284685 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1123 09:02:44.980403  284685 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1123 09:02:44.980453  284685 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1123 09:02:44.980496  284685 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1123 09:02:45.007712  284685 cri.go:89] found id: "7625a0bbc7717e3a2af3d02daa04ca8fad92012d00a0e55d65b4e67dfb7e49bd"
	I1123 09:02:45.007733  284685 cri.go:89] found id: ""
	I1123 09:02:45.007743  284685 logs.go:282] 1 containers: [7625a0bbc7717e3a2af3d02daa04ca8fad92012d00a0e55d65b4e67dfb7e49bd]
	I1123 09:02:45.007805  284685 ssh_runner.go:195] Run: which crictl
	I1123 09:02:45.011838  284685 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1123 09:02:45.011903  284685 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1123 09:02:45.039421  284685 cri.go:89] found id: ""
	I1123 09:02:45.039448  284685 logs.go:282] 0 containers: []
	W1123 09:02:45.039462  284685 logs.go:284] No container was found matching "etcd"
	I1123 09:02:45.039469  284685 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1123 09:02:45.039527  284685 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1123 09:02:45.066402  284685 cri.go:89] found id: ""
	I1123 09:02:45.066429  284685 logs.go:282] 0 containers: []
	W1123 09:02:45.066438  284685 logs.go:284] No container was found matching "coredns"
	I1123 09:02:45.066446  284685 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1123 09:02:45.066500  284685 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1123 09:02:45.093672  284685 cri.go:89] found id: "fade67abbc989ba285c7606dd3499da4894a7554fc5ea7609dbd2b5cad03e58d"
	I1123 09:02:45.093696  284685 cri.go:89] found id: ""
	I1123 09:02:45.093704  284685 logs.go:282] 1 containers: [fade67abbc989ba285c7606dd3499da4894a7554fc5ea7609dbd2b5cad03e58d]
	I1123 09:02:45.093763  284685 ssh_runner.go:195] Run: which crictl
	I1123 09:02:45.097613  284685 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1123 09:02:45.097679  284685 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1123 09:02:45.124851  284685 cri.go:89] found id: ""
	I1123 09:02:45.124873  284685 logs.go:282] 0 containers: []
	W1123 09:02:45.124885  284685 logs.go:284] No container was found matching "kube-proxy"
	I1123 09:02:45.124891  284685 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1123 09:02:45.124952  284685 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1123 09:02:45.153700  284685 cri.go:89] found id: "d38c81ac5f3288a5c1ad59bb1d159bb76a305a9504a74fa14594dca89bdf4306"
	I1123 09:02:45.153721  284685 cri.go:89] found id: ""
	I1123 09:02:45.153730  284685 logs.go:282] 1 containers: [d38c81ac5f3288a5c1ad59bb1d159bb76a305a9504a74fa14594dca89bdf4306]
	I1123 09:02:45.153779  284685 ssh_runner.go:195] Run: which crictl
	I1123 09:02:45.157778  284685 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1123 09:02:45.157843  284685 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1123 09:02:45.185702  284685 cri.go:89] found id: ""
	I1123 09:02:45.185731  284685 logs.go:282] 0 containers: []
	W1123 09:02:45.185742  284685 logs.go:284] No container was found matching "kindnet"
	I1123 09:02:45.185749  284685 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1123 09:02:45.185811  284685 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1123 09:02:45.214981  284685 cri.go:89] found id: ""
	I1123 09:02:45.215012  284685 logs.go:282] 0 containers: []
	W1123 09:02:45.215021  284685 logs.go:284] No container was found matching "storage-provisioner"
	I1123 09:02:45.215031  284685 logs.go:123] Gathering logs for kube-controller-manager [d38c81ac5f3288a5c1ad59bb1d159bb76a305a9504a74fa14594dca89bdf4306] ...
	I1123 09:02:45.215049  284685 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d38c81ac5f3288a5c1ad59bb1d159bb76a305a9504a74fa14594dca89bdf4306"
	I1123 09:02:45.242984  284685 logs.go:123] Gathering logs for CRI-O ...
	I1123 09:02:45.243015  284685 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1123 09:02:45.287828  284685 logs.go:123] Gathering logs for container status ...
	I1123 09:02:45.287859  284685 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1123 09:02:45.320426  284685 logs.go:123] Gathering logs for kubelet ...
	I1123 09:02:45.320452  284685 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1123 09:02:45.407753  284685 logs.go:123] Gathering logs for dmesg ...
	I1123 09:02:45.407794  284685 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1123 09:02:45.425107  284685 logs.go:123] Gathering logs for describe nodes ...
	I1123 09:02:45.425134  284685 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1123 09:02:45.486361  284685 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1123 09:02:45.486380  284685 logs.go:123] Gathering logs for kube-apiserver [7625a0bbc7717e3a2af3d02daa04ca8fad92012d00a0e55d65b4e67dfb7e49bd] ...
	I1123 09:02:45.486395  284685 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7625a0bbc7717e3a2af3d02daa04ca8fad92012d00a0e55d65b4e67dfb7e49bd"
	I1123 09:02:45.521325  284685 logs.go:123] Gathering logs for kube-scheduler [fade67abbc989ba285c7606dd3499da4894a7554fc5ea7609dbd2b5cad03e58d] ...
	I1123 09:02:45.521351  284685 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fade67abbc989ba285c7606dd3499da4894a7554fc5ea7609dbd2b5cad03e58d"
	I1123 09:02:48.073993  284685 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
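
The "container status" collector in the log-gathering pass above is deliberately forgiving: it resolves crictl through `which` so any PATH location works, and lets bash's || fall through to `docker ps -a` when crictl is absent or fails. Reproduced as a small sketch (same command shape as the log; the wrapper function is mine):

// containerStatus runs the same fallback chain the log shows.
package main

import (
	"fmt"
	"os/exec"
)

func containerStatus() (string, error) {
	script := "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	out, err := exec.Command("/bin/bash", "-c", script).CombinedOutput()
	return string(out), err
}

func main() {
	out, err := containerStatus()
	fmt.Println(out, err)
}
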
	
	
	==> CRI-O <==
	Nov 23 09:02:44 pause-397202 crio[2132]: time="2025-11-23T09:02:44.1286646Z" level=info msg="RDT not available in the host system"
	Nov 23 09:02:44 pause-397202 crio[2132]: time="2025-11-23T09:02:44.128674874Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Nov 23 09:02:44 pause-397202 crio[2132]: time="2025-11-23T09:02:44.129458981Z" level=info msg="Conmon does support the --sync option"
	Nov 23 09:02:44 pause-397202 crio[2132]: time="2025-11-23T09:02:44.129475963Z" level=info msg="Conmon does support the --log-global-size-max option"
	Nov 23 09:02:44 pause-397202 crio[2132]: time="2025-11-23T09:02:44.129488262Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Nov 23 09:02:44 pause-397202 crio[2132]: time="2025-11-23T09:02:44.130199626Z" level=info msg="Conmon does support the --sync option"
	Nov 23 09:02:44 pause-397202 crio[2132]: time="2025-11-23T09:02:44.130214292Z" level=info msg="Conmon does support the --log-global-size-max option"
	Nov 23 09:02:44 pause-397202 crio[2132]: time="2025-11-23T09:02:44.134105481Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 23 09:02:44 pause-397202 crio[2132]: time="2025-11-23T09:02:44.134126005Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 23 09:02:44 pause-397202 crio[2132]: time="2025-11-23T09:02:44.134596968Z" level=info msg="Current CRI-O configuration:\n[crio]\n  root = \"/var/lib/containers/storage\"\n  runroot = \"/run/containers/storage\"\n  imagestore = \"\"\n  storage_driver = \"overlay\"\n  log_dir = \"/var/log/crio/pods\"\n  version_file = \"/var/run/crio/version\"\n  version_file_persist = \"\"\n  clean_shutdown_file = \"/var/lib/crio/clean.shutdown\"\n  internal_wipe = true\n  internal_repair = true\n  [crio.api]\n    grpc_max_send_msg_size = 83886080\n    grpc_max_recv_msg_size = 83886080\n    listen = \"/var/run/crio/crio.sock\"\n    stream_address = \"127.0.0.1\"\n    stream_port = \"0\"\n    stream_enable_tls = false\n    stream_tls_cert = \"\"\n    stream_tls_key = \"\"\n    stream_tls_ca = \"\"\n    stream_idle_timeout = \"\"\n  [crio.runtime]\n    no_pivot = false\n    selinux = false\n    log_to_journald = false\n    drop_infra_ctr = true\n    read_only = false\n    hooks_dir = [\"/usr/share/containers/oci/hoo
ks.d\"]\n    default_capabilities = [\"CHOWN\", \"DAC_OVERRIDE\", \"FSETID\", \"FOWNER\", \"SETGID\", \"SETUID\", \"SETPCAP\", \"NET_BIND_SERVICE\", \"KILL\"]\n    add_inheritable_capabilities = false\n    default_sysctls = [\"net.ipv4.ip_unprivileged_port_start=0\"]\n    allowed_devices = [\"/dev/fuse\", \"/dev/net/tun\"]\n    cdi_spec_dirs = [\"/etc/cdi\", \"/var/run/cdi\"]\n    device_ownership_from_security_context = false\n    default_runtime = \"crun\"\n    decryption_keys_path = \"/etc/crio/keys/\"\n    conmon = \"\"\n    conmon_cgroup = \"pod\"\n    seccomp_profile = \"\"\n    privileged_seccomp_profile = \"\"\n    apparmor_profile = \"crio-default\"\n    blockio_config_file = \"\"\n    blockio_reload = false\n    irqbalance_config_file = \"/etc/sysconfig/irqbalance\"\n    rdt_config_file = \"\"\n    cgroup_manager = \"systemd\"\n    default_mounts_file = \"\"\n    container_exits_dir = \"/var/run/crio/exits\"\n    container_attach_socket_dir = \"/var/run/crio\"\n    bind_mount_prefix = \"\"\n    uid_
mappings = \"\"\n    minimum_mappable_uid = -1\n    gid_mappings = \"\"\n    minimum_mappable_gid = -1\n    log_level = \"info\"\n    log_filter = \"\"\n    namespaces_dir = \"/var/run\"\n    pinns_path = \"/usr/bin/pinns\"\n    enable_criu_support = false\n    pids_limit = -1\n    log_size_max = -1\n    ctr_stop_timeout = 30\n    separate_pull_cgroup = \"\"\n    infra_ctr_cpuset = \"\"\n    shared_cpuset = \"\"\n    enable_pod_events = false\n    irqbalance_config_restore_file = \"/etc/sysconfig/orig_irq_banned_cpus\"\n    hostnetwork_disable_selinux = true\n    disable_hostport_mapping = false\n    timezone = \"\"\n    [crio.runtime.runtimes]\n      [crio.runtime.runtimes.crun]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/crun\"\n        runtime_type = \"\"\n        runtime_root = \"/run/crun\"\n        allowed_annotations = [\"io.containers.trace-syscall\"]\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory
= \"12MiB\"\n        no_sync_log = false\n      [crio.runtime.runtimes.runc]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/runc\"\n        runtime_type = \"\"\n        runtime_root = \"/run/runc\"\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory = \"12MiB\"\n        no_sync_log = false\n  [crio.image]\n    default_transport = \"docker://\"\n    global_auth_file = \"\"\n    pause_image = \"registry.k8s.io/pause:3.10.1\"\n    pause_image_auth_file = \"\"\n    pause_command = \"/pause\"\n    signature_policy = \"/etc/crio/policy.json\"\n    signature_policy_dir = \"/etc/crio/policies\"\n    image_volumes = \"mkdir\"\n    big_files_temporary_dir = \"\"\n    auto_reload_registries = false\n    pull_progress_timeout = \"0s\"\n    oci_artifact_mount_support = true\n    short_name_mode = \"enforcing\"\n  [crio.network]\n    cni_default_network = \"\"\n    network_dir = \"/etc/cni/net.d/\"\n    plugin_dirs = [\"/opt/
cni/bin/\"]\n  [crio.metrics]\n    enable_metrics = false\n    metrics_collectors = [\"image_pulls_layer_size\", \"containers_events_dropped_total\", \"containers_oom_total\", \"processes_defunct\", \"operations_total\", \"operations_latency_seconds\", \"operations_latency_seconds_total\", \"operations_errors_total\", \"image_pulls_bytes_total\", \"image_pulls_skipped_bytes_total\", \"image_pulls_failure_total\", \"image_pulls_success_total\", \"image_layer_reuse_total\", \"containers_oom_count_total\", \"containers_seccomp_notifier_count_total\", \"resources_stalled_at_stage\", \"containers_stopped_monitor_count\"]\n    metrics_host = \"127.0.0.1\"\n    metrics_port = 9090\n    metrics_socket = \"\"\n    metrics_cert = \"\"\n    metrics_key = \"\"\n  [crio.tracing]\n    enable_tracing = false\n    tracing_endpoint = \"127.0.0.1:4317\"\n    tracing_sampling_rate_per_million = 0\n  [crio.stats]\n    stats_collection_period = 0\n    collection_period = 0\n  [crio.nri]\n    enable_nri = true\n    nri_listen = \"
/var/run/nri/nri.sock\"\n    nri_plugin_dir = \"/opt/nri/plugins\"\n    nri_plugin_config_dir = \"/etc/nri/conf.d\"\n    nri_plugin_registration_timeout = \"5s\"\n    nri_plugin_request_timeout = \"2s\"\n    nri_disable_connections = false\n    [crio.nri.default_validator]\n      nri_enable_default_validator = false\n      nri_validator_reject_oci_hook_adjustment = false\n      nri_validator_reject_runtime_default_seccomp_adjustment = false\n      nri_validator_reject_unconfined_seccomp_adjustment = false\n      nri_validator_reject_custom_seccomp_adjustment = false\n      nri_validator_reject_namespace_adjustment = false\n      nri_validator_tolerate_missing_plugins_annotation = \"\"\n"
	Nov 23 09:02:44 pause-397202 crio[2132]: time="2025-11-23T09:02:44.135024236Z" level=info msg="Attempting to restore irqbalance config from /etc/sysconfig/orig_irq_banned_cpus"
	Nov 23 09:02:44 pause-397202 crio[2132]: time="2025-11-23T09:02:44.135079551Z" level=info msg="Restore irqbalance config: failed to get current CPU ban list, ignoring"
	Nov 23 09:02:44 pause-397202 crio[2132]: time="2025-11-23T09:02:44.226827558Z" level=info msg="Got pod network &{Name:coredns-66bc5c9577-llbxg Namespace:kube-system ID:90a598823f988fad5a7e76487f2384502f4d083c42274b42452ef472849ffc26 UID:9e1f38f4-aec9-4d81-9da4-8077ab957f85 NetNS:/var/run/netns/c92d3da4-25b7-4280-a036-2b31bd0a0a2e Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000292428}] Aliases:map[]}"
	Nov 23 09:02:44 pause-397202 crio[2132]: time="2025-11-23T09:02:44.22707027Z" level=info msg="Checking pod kube-system_coredns-66bc5c9577-llbxg for CNI network kindnet (type=ptp)"
	Nov 23 09:02:44 pause-397202 crio[2132]: time="2025-11-23T09:02:44.227514819Z" level=info msg="Registered SIGHUP reload watcher"
	Nov 23 09:02:44 pause-397202 crio[2132]: time="2025-11-23T09:02:44.227538125Z" level=info msg="Starting seccomp notifier watcher"
	Nov 23 09:02:44 pause-397202 crio[2132]: time="2025-11-23T09:02:44.227586414Z" level=info msg="Create NRI interface"
	Nov 23 09:02:44 pause-397202 crio[2132]: time="2025-11-23T09:02:44.227688972Z" level=info msg="built-in NRI default validator is disabled"
	Nov 23 09:02:44 pause-397202 crio[2132]: time="2025-11-23T09:02:44.227701141Z" level=info msg="runtime interface created"
	Nov 23 09:02:44 pause-397202 crio[2132]: time="2025-11-23T09:02:44.227710851Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Nov 23 09:02:44 pause-397202 crio[2132]: time="2025-11-23T09:02:44.227715599Z" level=info msg="runtime interface starting up..."
	Nov 23 09:02:44 pause-397202 crio[2132]: time="2025-11-23T09:02:44.227720732Z" level=info msg="starting plugins..."
	Nov 23 09:02:44 pause-397202 crio[2132]: time="2025-11-23T09:02:44.227731798Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Nov 23 09:02:44 pause-397202 crio[2132]: time="2025-11-23T09:02:44.228071719Z" level=info msg="No systemd watchdog enabled"
	Nov 23 09:02:44 pause-397202 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                    NAMESPACE
	ccc42c322c200       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   12 seconds ago      Running             coredns                   0                   90a598823f988       coredns-66bc5c9577-llbxg               kube-system
	365b3b573a77a       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7   23 seconds ago      Running             kube-proxy                0                   1b773a159057c       kube-proxy-qfmgc                       kube-system
	f3d24f3739abc       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   23 seconds ago      Running             kindnet-cni               0                   d467a56173100       kindnet-hkxw7                          kube-system
	a028a05b2a794       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813   34 seconds ago      Running             kube-scheduler            0                   9c3231f729b3f       kube-scheduler-pause-397202            kube-system
	f5c1bc194c3b4       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97   34 seconds ago      Running             kube-apiserver            0                   f90841af713aa       kube-apiserver-pause-397202            kube-system
	10634abd56000       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f   34 seconds ago      Running             kube-controller-manager   0                   f9a46522a2b0d       kube-controller-manager-pause-397202   kube-system
	f9b138bbbfef9       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   34 seconds ago      Running             etcd                      0                   758e4507dfb10       etcd-pause-397202                      kube-system
	
	
	==> coredns [ccc42c322c200e45d184ea9bd71d69ba34b954789b671365dce7545055e88536] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = c7556d8fdf49c5e32a9077be8cfb9fc6947bb07e663a10d55b192eb63ad1f2bd9793e8e5f5a36fc9abb1957831eec5c997fd9821790e3990ae9531bf41ecea37
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:51247 - 18016 "HINFO IN 8093919022219011200.8258790786039506546. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.02611965s
	
	
	==> describe nodes <==
	Name:               pause-397202
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-397202
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=50c3a8a3c03e8a84b6c978a884d21c3de8c6d4f1
	                    minikube.k8s.io/name=pause-397202
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_23T09_02_23_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 23 Nov 2025 09:02:18 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-397202
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 23 Nov 2025 09:02:41 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 23 Nov 2025 09:02:37 +0000   Sun, 23 Nov 2025 09:02:16 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 23 Nov 2025 09:02:37 +0000   Sun, 23 Nov 2025 09:02:16 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 23 Nov 2025 09:02:37 +0000   Sun, 23 Nov 2025 09:02:16 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 23 Nov 2025 09:02:37 +0000   Sun, 23 Nov 2025 09:02:37 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    pause-397202
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 9629f1d5bc1ed524a56ce23c69214c09
	  System UUID:                7553187a-fd51-4de3-8874-b0ec6f7b6f6b
	  Boot ID:                    49386d37-de97-442f-8364-9b77578bcf47
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-llbxg                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     24s
	  kube-system                 etcd-pause-397202                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         31s
	  kube-system                 kindnet-hkxw7                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      24s
	  kube-system                 kube-apiserver-pause-397202             250m (3%)     0 (0%)      0 (0%)           0 (0%)         29s
	  kube-system                 kube-controller-manager-pause-397202    200m (2%)     0 (0%)      0 (0%)           0 (0%)         29s
	  kube-system                 kube-proxy-qfmgc                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         24s
	  kube-system                 kube-scheduler-pause-397202             100m (1%)     0 (0%)      0 (0%)           0 (0%)         29s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 23s   kube-proxy       
	  Normal  Starting                 29s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  29s   kubelet          Node pause-397202 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    29s   kubelet          Node pause-397202 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     29s   kubelet          Node pause-397202 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           25s   node-controller  Node pause-397202 event: Registered Node pause-397202 in Controller
	  Normal  NodeReady                13s   kubelet          Node pause-397202 status is now: NodeReady
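
The percentages in the "Allocated resources" table above are requests divided by node allocatable and truncated for display: 850m CPU against 8 cores (8000m) is about 10.6%, shown as 10%, and 220Mi against 32863356Ki of memory is about 0.69%, shown as 0%. A quick check:

// Recompute the "Allocated resources" percentages from the node table.
package main

import "fmt"

func main() {
	cpuPct := 100 * 850.0 / 8000.0            // 850m requested of 8 CPUs (8000m)
	memPct := 100 * 220.0 * 1024 / 32863356.0 // 220Mi requested of 32863356Ki
	fmt.Printf("cpu %.1f%% mem %.2f%%\n", cpuPct, memPct) // cpu 10.6% mem 0.69%
}
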
	
	
	==> dmesg <==
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 06 82 4b 59 78 74 08 06
	[Nov23 08:13] IPv4: martian source 10.244.0.1 from 10.244.0.51, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 1e 73 2a 74 8f 84 08 06
	[Nov23 08:22] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 52 89 3b 40 d7 04 72 61 ff 75 a6 33 08 00
	[  +1.017594] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000026] ll header: 00000000: 52 89 3b 40 d7 04 72 61 ff 75 a6 33 08 00
	[  +1.023854] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000020] ll header: 00000000: 52 89 3b 40 d7 04 72 61 ff 75 a6 33 08 00
	[  +1.023902] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 52 89 3b 40 d7 04 72 61 ff 75 a6 33 08 00
	[  +1.024926] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 52 89 3b 40 d7 04 72 61 ff 75 a6 33 08 00
	[  +1.022928] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000027] ll header: 00000000: 52 89 3b 40 d7 04 72 61 ff 75 a6 33 08 00
	[  +2.047819] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 52 89 3b 40 d7 04 72 61 ff 75 a6 33 08 00
	[  +4.031665] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 52 89 3b 40 d7 04 72 61 ff 75 a6 33 08 00
	[  +8.255342] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 52 89 3b 40 d7 04 72 61 ff 75 a6 33 08 00
	[Nov23 08:23] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 52 89 3b 40 d7 04 72 61 ff 75 a6 33 08 00
	[ +32.253523] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000023] ll header: 00000000: 52 89 3b 40 d7 04 72 61 ff 75 a6 33 08 00
	
	
	==> etcd [f9b138bbbfef9748bb9fc39c82d498ae87ac8d5da5ed98f16b602617b6e822b0] <==
	{"level":"info","ts":"2025-11-23T09:02:22.676683Z","caller":"traceutil/trace.go:172","msg":"trace[288519952] range","detail":"{range_begin:/registry/clusterroles/kindnet; range_end:; response_count:0; response_revision:267; }","duration":"127.464378ms","start":"2025-11-23T09:02:22.549160Z","end":"2025-11-23T09:02:22.676624Z","steps":["trace[288519952] 'agreement among raft nodes before linearized reading'  (duration: 126.666412ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-23T09:02:22.676880Z","caller":"traceutil/trace.go:172","msg":"trace[707599391] transaction","detail":"{read_only:false; response_revision:268; number_of_response:1; }","duration":"137.579912ms","start":"2025-11-23T09:02:22.539287Z","end":"2025-11-23T09:02:22.676867Z","steps":["trace[707599391] 'process raft request'  (duration: 136.598876ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-23T09:02:22.676754Z","caller":"traceutil/trace.go:172","msg":"trace[1765641848] transaction","detail":"{read_only:false; response_revision:269; number_of_response:1; }","duration":"136.060377ms","start":"2025-11-23T09:02:22.540683Z","end":"2025-11-23T09:02:22.676743Z","steps":["trace[1765641848] 'process raft request'  (duration: 135.989812ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-23T09:02:22.818065Z","caller":"traceutil/trace.go:172","msg":"trace[1370234003] linearizableReadLoop","detail":"{readStateIndex:278; appliedIndex:278; }","duration":"135.524728ms","start":"2025-11-23T09:02:22.682516Z","end":"2025-11-23T09:02:22.818041Z","steps":["trace[1370234003] 'read index received'  (duration: 135.515341ms)","trace[1370234003] 'applied index is now lower than readState.Index'  (duration: 8.155µs)"],"step_count":2}
	{"level":"warn","ts":"2025-11-23T09:02:23.028946Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"346.384271ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/resourcequota-controller\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-23T09:02:23.029039Z","caller":"traceutil/trace.go:172","msg":"trace[1892397226] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/resourcequota-controller; range_end:; response_count:0; response_revision:269; }","duration":"346.509757ms","start":"2025-11-23T09:02:22.682511Z","end":"2025-11-23T09:02:23.029021Z","steps":["trace[1892397226] 'agreement among raft nodes before linearized reading'  (duration: 135.616476ms)","trace[1892397226] 'range keys from in-memory index tree'  (duration: 210.735397ms)"],"step_count":2}
	{"level":"warn","ts":"2025-11-23T09:02:23.029078Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-11-23T09:02:22.682497Z","time spent":"346.565843ms","remote":"127.0.0.1:44280","response type":"/etcdserverpb.KV/Range","request count":0,"request size":66,"response count":0,"response size":29,"request content":"key:\"/registry/serviceaccounts/kube-system/resourcequota-controller\" limit:1 "}
	{"level":"warn","ts":"2025-11-23T09:02:23.030224Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"211.627867ms","expected-duration":"100ms","prefix":"","request":"header:<ID:6571766361752330637 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/clusterroles/kindnet\" mod_revision:0 > success:<request_put:<key:\"/registry/clusterroles/kindnet\" value_size:1042 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2025-11-23T09:02:23.030389Z","caller":"traceutil/trace.go:172","msg":"trace[1303714613] transaction","detail":"{read_only:false; response_revision:271; number_of_response:1; }","duration":"343.11176ms","start":"2025-11-23T09:02:22.687265Z","end":"2025-11-23T09:02:23.030377Z","steps":["trace[1303714613] 'process raft request'  (duration: 343.033489ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-23T09:02:23.030419Z","caller":"traceutil/trace.go:172","msg":"trace[980065289] transaction","detail":"{read_only:false; response_revision:270; number_of_response:1; }","duration":"349.593419ms","start":"2025-11-23T09:02:22.680804Z","end":"2025-11-23T09:02:23.030397Z","steps":["trace[980065289] 'process raft request'  (duration: 137.352058ms)","trace[980065289] 'compare'  (duration: 211.063448ms)"],"step_count":2}
	{"level":"warn","ts":"2025-11-23T09:02:23.030593Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-11-23T09:02:22.687247Z","time spent":"343.17215ms","remote":"127.0.0.1:44232","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":7268,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/pods/kube-system/kube-controller-manager-pause-397202\" mod_revision:243 > success:<request_put:<key:\"/registry/pods/kube-system/kube-controller-manager-pause-397202\" value_size:7197 >> failure:<request_range:<key:\"/registry/pods/kube-system/kube-controller-manager-pause-397202\" > >"}
	{"level":"warn","ts":"2025-11-23T09:02:23.030696Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-11-23T09:02:22.680788Z","time spent":"349.871873ms","remote":"127.0.0.1:44598","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1080,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/clusterroles/kindnet\" mod_revision:0 > success:<request_put:<key:\"/registry/clusterroles/kindnet\" value_size:1042 >> failure:<>"}
	{"level":"info","ts":"2025-11-23T09:02:23.300403Z","caller":"traceutil/trace.go:172","msg":"trace[1497327376] linearizableReadLoop","detail":"{readStateIndex:283; appliedIndex:283; }","duration":"178.318571ms","start":"2025-11-23T09:02:23.122046Z","end":"2025-11-23T09:02:23.300365Z","steps":["trace[1497327376] 'read index received'  (duration: 178.307571ms)","trace[1497327376] 'applied index is now lower than readState.Index'  (duration: 9.033µs)"],"step_count":2}
	{"level":"warn","ts":"2025-11-23T09:02:23.404890Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"282.818071ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/horizontal-pod-autoscaler\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"warn","ts":"2025-11-23T09:02:23.404951Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"104.394209ms","expected-duration":"100ms","prefix":"","request":"header:<ID:6571766361752330652 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/serviceaccounts/kube-system/kindnet\" mod_revision:0 > success:<request_put:<key:\"/registry/serviceaccounts/kube-system/kindnet\" value_size:452 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2025-11-23T09:02:23.405031Z","caller":"traceutil/trace.go:172","msg":"trace[1914715188] transaction","detail":"{read_only:false; response_revision:275; number_of_response:1; }","duration":"290.607273ms","start":"2025-11-23T09:02:23.114409Z","end":"2025-11-23T09:02:23.405016Z","steps":["trace[1914715188] 'process raft request'  (duration: 186.083331ms)","trace[1914715188] 'compare'  (duration: 104.171404ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-23T09:02:23.405383Z","caller":"traceutil/trace.go:172","msg":"trace[2070636495] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/horizontal-pod-autoscaler; range_end:; response_count:0; response_revision:274; }","duration":"282.906314ms","start":"2025-11-23T09:02:23.122041Z","end":"2025-11-23T09:02:23.404947Z","steps":["trace[2070636495] 'agreement among raft nodes before linearized reading'  (duration: 178.41807ms)","trace[2070636495] 'range keys from in-memory index tree'  (duration: 104.372222ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-23T09:02:23.406117Z","caller":"traceutil/trace.go:172","msg":"trace[442467875] transaction","detail":"{read_only:false; response_revision:276; number_of_response:1; }","duration":"290.322103ms","start":"2025-11-23T09:02:23.115783Z","end":"2025-11-23T09:02:23.406105Z","steps":["trace[442467875] 'process raft request'  (duration: 290.225421ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-23T09:02:23.663606Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"160.303032ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/disruption-controller\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-23T09:02:23.663684Z","caller":"traceutil/trace.go:172","msg":"trace[1267289251] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/disruption-controller; range_end:; response_count:0; response_revision:278; }","duration":"160.393459ms","start":"2025-11-23T09:02:23.503270Z","end":"2025-11-23T09:02:23.663664Z","steps":["trace[1267289251] 'agreement among raft nodes before linearized reading'  (duration: 20.663787ms)","trace[1267289251] 'range keys from in-memory index tree'  (duration: 139.604648ms)"],"step_count":2}
	{"level":"warn","ts":"2025-11-23T09:02:23.663743Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"139.689529ms","expected-duration":"100ms","prefix":"","request":"header:<ID:6571766361752330664 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/pods/kube-system/kube-controller-manager-pause-397202\" mod_revision:271 > success:<request_put:<key:\"/registry/pods/kube-system/kube-controller-manager-pause-397202\" value_size:7407 >> failure:<request_range:<key:\"/registry/pods/kube-system/kube-controller-manager-pause-397202\" > >>","response":"size:16"}
	{"level":"info","ts":"2025-11-23T09:02:23.663907Z","caller":"traceutil/trace.go:172","msg":"trace[1185611390] transaction","detail":"{read_only:false; response_revision:279; number_of_response:1; }","duration":"162.097745ms","start":"2025-11-23T09:02:23.501785Z","end":"2025-11-23T09:02:23.663883Z","steps":["trace[1185611390] 'process raft request'  (duration: 22.203301ms)","trace[1185611390] 'compare'  (duration: 139.582846ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-23T09:02:23.786013Z","caller":"traceutil/trace.go:172","msg":"trace[61419163] transaction","detail":"{read_only:false; response_revision:281; number_of_response:1; }","duration":"115.740236ms","start":"2025-11-23T09:02:23.670252Z","end":"2025-11-23T09:02:23.785992Z","steps":["trace[61419163] 'process raft request'  (duration: 113.262148ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-23T09:02:23.786140Z","caller":"traceutil/trace.go:172","msg":"trace[1259845641] transaction","detail":"{read_only:false; response_revision:282; number_of_response:1; }","duration":"113.500679ms","start":"2025-11-23T09:02:23.672623Z","end":"2025-11-23T09:02:23.786124Z","steps":["trace[1259845641] 'process raft request'  (duration: 113.218051ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-23T09:02:23.786153Z","caller":"traceutil/trace.go:172","msg":"trace[1029052553] transaction","detail":"{read_only:false; response_revision:283; number_of_response:1; }","duration":"112.387813ms","start":"2025-11-23T09:02:23.673754Z","end":"2025-11-23T09:02:23.786141Z","steps":["trace[1029052553] 'process raft request'  (duration: 112.279067ms)"],"step_count":1}
	
	
	==> kernel <==
	 09:02:50 up  1:45,  0 user,  load average: 6.54, 3.36, 1.97
	Linux pause-397202 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [f3d24f3739abc889dcbb426abbf3b380336ddafb494a0b1d64a843f6189a19d0] <==
	I1123 09:02:27.192083       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1123 09:02:27.192327       1 main.go:139] hostIP = 192.168.94.2
	podIP = 192.168.94.2
	I1123 09:02:27.192479       1 main.go:148] setting mtu 1500 for CNI 
	I1123 09:02:27.192501       1 main.go:178] kindnetd IP family: "ipv4"
	I1123 09:02:27.192525       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-23T09:02:27Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1123 09:02:27.394821       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1123 09:02:27.566062       1 controller.go:381] "Waiting for informer caches to sync"
	I1123 09:02:27.566121       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1123 09:02:27.587884       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1123 09:02:27.887718       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1123 09:02:27.887766       1 metrics.go:72] Registering metrics
	I1123 09:02:27.888208       1 controller.go:711] "Syncing nftables rules"
	I1123 09:02:37.396079       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1123 09:02:37.396143       1 main.go:301] handling current node
	I1123 09:02:47.398492       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1123 09:02:47.398523       1 main.go:301] handling current node
	
	
	==> kube-apiserver [f5c1bc194c3b4fc7b5d8e2f47b51845d9a335c13f9879769b619d883841f25f4] <==
	E1123 09:02:18.663590       1 controller.go:148] "Unhandled Error" err="while syncing ConfigMap \"kube-system/kube-apiserver-legacy-service-account-token-tracking\", err: namespaces \"kube-system\" not found" logger="UnhandledError"
	E1123 09:02:18.696360       1 controller.go:145] "Failed to ensure lease exists, will retry" err="namespaces \"kube-system\" not found" interval="200ms"
	I1123 09:02:18.711116       1 controller.go:667] quota admission added evaluator for: namespaces
	I1123 09:02:18.718130       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1123 09:02:18.718169       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1123 09:02:18.729739       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1123 09:02:18.731873       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1123 09:02:18.899884       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1123 09:02:19.512952       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1123 09:02:19.517818       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1123 09:02:19.517839       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1123 09:02:20.098183       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1123 09:02:20.150022       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1123 09:02:20.219205       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1123 09:02:20.226732       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.94.2]
	I1123 09:02:20.228479       1 controller.go:667] quota admission added evaluator for: endpoints
	I1123 09:02:20.233475       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1123 09:02:20.542291       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1123 09:02:21.365081       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1123 09:02:21.386043       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1123 09:02:21.399730       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1123 09:02:26.142944       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1123 09:02:26.147061       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1123 09:02:26.191610       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1123 09:02:26.591155       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [10634abd560004335d2e9611aa603556560fb6704e2dd0a376e2af47be6e9d37] <==
	I1123 09:02:25.536649       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1123 09:02:25.536766       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1123 09:02:25.536816       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1123 09:02:25.536833       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1123 09:02:25.536843       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1123 09:02:25.537236       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1123 09:02:25.539440       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1123 09:02:25.539469       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1123 09:02:25.539931       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1123 09:02:25.541767       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1123 09:02:25.541892       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1123 09:02:25.541949       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1123 09:02:25.541962       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1123 09:02:25.541980       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1123 09:02:25.542852       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1123 09:02:25.544279       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1123 09:02:25.547449       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1123 09:02:25.547765       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="pause-397202" podCIDRs=["10.244.0.0/24"]
	I1123 09:02:25.548732       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1123 09:02:25.549504       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1123 09:02:25.554014       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1123 09:02:25.562240       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1123 09:02:25.564416       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1123 09:02:25.570895       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1123 09:02:40.479416       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [365b3b573a77a2ff0a22deddb7fdb06e6b2bc920107e22244e4820bc5137df66] <==
	I1123 09:02:27.017156       1 server_linux.go:53] "Using iptables proxy"
	I1123 09:02:27.084809       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1123 09:02:27.185221       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1123 09:02:27.185256       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.94.2"]
	E1123 09:02:27.185359       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1123 09:02:27.205003       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1123 09:02:27.205051       1 server_linux.go:132] "Using iptables Proxier"
	I1123 09:02:27.210226       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1123 09:02:27.210613       1 server.go:527] "Version info" version="v1.34.1"
	I1123 09:02:27.210632       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1123 09:02:27.212197       1 config.go:403] "Starting serviceCIDR config controller"
	I1123 09:02:27.212221       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1123 09:02:27.212254       1 config.go:200] "Starting service config controller"
	I1123 09:02:27.212260       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1123 09:02:27.212288       1 config.go:106] "Starting endpoint slice config controller"
	I1123 09:02:27.212295       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1123 09:02:27.212411       1 config.go:309] "Starting node config controller"
	I1123 09:02:27.212418       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1123 09:02:27.212425       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1123 09:02:27.313391       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1123 09:02:27.313383       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1123 09:02:27.313398       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [a028a05b2a7941979bb89b131402d5423bd73f7f4ad4b230d4a58cf622da8d85] <==
	E1123 09:02:18.580837       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1123 09:02:18.581135       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1123 09:02:18.581316       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1123 09:02:18.581327       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1123 09:02:18.581394       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1123 09:02:18.581518       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1123 09:02:18.581821       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1123 09:02:18.582114       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1123 09:02:18.582255       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1123 09:02:18.582323       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1123 09:02:18.582464       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1123 09:02:18.582610       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1123 09:02:19.516287       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1123 09:02:19.549553       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1123 09:02:19.566113       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1123 09:02:19.665810       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1123 09:02:19.677039       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1123 09:02:19.697649       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1123 09:02:19.710821       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1123 09:02:19.753116       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1123 09:02:19.771817       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1123 09:02:19.856284       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1123 09:02:19.862553       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1123 09:02:19.878773       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	I1123 09:02:21.668698       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 23 09:02:22 pause-397202 kubelet[1314]: I1123 09:02:22.678505    1314 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-pause-397202" podStartSLOduration=1.678479531 podStartE2EDuration="1.678479531s" podCreationTimestamp="2025-11-23 09:02:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 09:02:22.53222513 +0000 UTC m=+1.422905565" watchObservedRunningTime="2025-11-23 09:02:22.678479531 +0000 UTC m=+1.569159950"
	Nov 23 09:02:23 pause-397202 kubelet[1314]: I1123 09:02:23.032035    1314 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-pause-397202" podStartSLOduration=2.03201147 podStartE2EDuration="2.03201147s" podCreationTimestamp="2025-11-23 09:02:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 09:02:22.678723341 +0000 UTC m=+1.569403768" watchObservedRunningTime="2025-11-23 09:02:23.03201147 +0000 UTC m=+1.922691894"
	Nov 23 09:02:23 pause-397202 kubelet[1314]: I1123 09:02:23.106327    1314 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-pause-397202" podStartSLOduration=2.106302468 podStartE2EDuration="2.106302468s" podCreationTimestamp="2025-11-23 09:02:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 09:02:23.032351813 +0000 UTC m=+1.923032215" watchObservedRunningTime="2025-11-23 09:02:23.106302468 +0000 UTC m=+1.996982887"
	Nov 23 09:02:23 pause-397202 kubelet[1314]: I1123 09:02:23.409341    1314 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-pause-397202" podStartSLOduration=4.409303589 podStartE2EDuration="4.409303589s" podCreationTimestamp="2025-11-23 09:02:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 09:02:23.10669412 +0000 UTC m=+1.997374526" watchObservedRunningTime="2025-11-23 09:02:23.409303589 +0000 UTC m=+2.299983994"
	Nov 23 09:02:25 pause-397202 kubelet[1314]: I1123 09:02:25.571564    1314 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Nov 23 09:02:25 pause-397202 kubelet[1314]: I1123 09:02:25.572344    1314 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 23 09:02:26 pause-397202 kubelet[1314]: I1123 09:02:26.630361    1314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/35f423d9-a900-4333-9b4c-835ffc193f45-xtables-lock\") pod \"kindnet-hkxw7\" (UID: \"35f423d9-a900-4333-9b4c-835ffc193f45\") " pod="kube-system/kindnet-hkxw7"
	Nov 23 09:02:26 pause-397202 kubelet[1314]: I1123 09:02:26.630399    1314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/35f423d9-a900-4333-9b4c-835ffc193f45-lib-modules\") pod \"kindnet-hkxw7\" (UID: \"35f423d9-a900-4333-9b4c-835ffc193f45\") " pod="kube-system/kindnet-hkxw7"
	Nov 23 09:02:26 pause-397202 kubelet[1314]: I1123 09:02:26.630423    1314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/887b8bcb-2b27-42a2-8854-1a7e62edef6b-kube-proxy\") pod \"kube-proxy-qfmgc\" (UID: \"887b8bcb-2b27-42a2-8854-1a7e62edef6b\") " pod="kube-system/kube-proxy-qfmgc"
	Nov 23 09:02:26 pause-397202 kubelet[1314]: I1123 09:02:26.630457    1314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/887b8bcb-2b27-42a2-8854-1a7e62edef6b-xtables-lock\") pod \"kube-proxy-qfmgc\" (UID: \"887b8bcb-2b27-42a2-8854-1a7e62edef6b\") " pod="kube-system/kube-proxy-qfmgc"
	Nov 23 09:02:26 pause-397202 kubelet[1314]: I1123 09:02:26.630520    1314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/35f423d9-a900-4333-9b4c-835ffc193f45-cni-cfg\") pod \"kindnet-hkxw7\" (UID: \"35f423d9-a900-4333-9b4c-835ffc193f45\") " pod="kube-system/kindnet-hkxw7"
	Nov 23 09:02:26 pause-397202 kubelet[1314]: I1123 09:02:26.630546    1314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-94n5v\" (UniqueName: \"kubernetes.io/projected/35f423d9-a900-4333-9b4c-835ffc193f45-kube-api-access-94n5v\") pod \"kindnet-hkxw7\" (UID: \"35f423d9-a900-4333-9b4c-835ffc193f45\") " pod="kube-system/kindnet-hkxw7"
	Nov 23 09:02:26 pause-397202 kubelet[1314]: I1123 09:02:26.630597    1314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w59rx\" (UniqueName: \"kubernetes.io/projected/887b8bcb-2b27-42a2-8854-1a7e62edef6b-kube-api-access-w59rx\") pod \"kube-proxy-qfmgc\" (UID: \"887b8bcb-2b27-42a2-8854-1a7e62edef6b\") " pod="kube-system/kube-proxy-qfmgc"
	Nov 23 09:02:26 pause-397202 kubelet[1314]: I1123 09:02:26.630712    1314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/887b8bcb-2b27-42a2-8854-1a7e62edef6b-lib-modules\") pod \"kube-proxy-qfmgc\" (UID: \"887b8bcb-2b27-42a2-8854-1a7e62edef6b\") " pod="kube-system/kube-proxy-qfmgc"
	Nov 23 09:02:27 pause-397202 kubelet[1314]: I1123 09:02:27.274365    1314 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-qfmgc" podStartSLOduration=1.274340133 podStartE2EDuration="1.274340133s" podCreationTimestamp="2025-11-23 09:02:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 09:02:27.274226187 +0000 UTC m=+6.164906621" watchObservedRunningTime="2025-11-23 09:02:27.274340133 +0000 UTC m=+6.165020555"
	Nov 23 09:02:27 pause-397202 kubelet[1314]: I1123 09:02:27.283718    1314 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-hkxw7" podStartSLOduration=1.283694572 podStartE2EDuration="1.283694572s" podCreationTimestamp="2025-11-23 09:02:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 09:02:27.283499565 +0000 UTC m=+6.174179988" watchObservedRunningTime="2025-11-23 09:02:27.283694572 +0000 UTC m=+6.174374994"
	Nov 23 09:02:37 pause-397202 kubelet[1314]: I1123 09:02:37.947752    1314 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Nov 23 09:02:38 pause-397202 kubelet[1314]: I1123 09:02:38.020618    1314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4n92f\" (UniqueName: \"kubernetes.io/projected/9e1f38f4-aec9-4d81-9da4-8077ab957f85-kube-api-access-4n92f\") pod \"coredns-66bc5c9577-llbxg\" (UID: \"9e1f38f4-aec9-4d81-9da4-8077ab957f85\") " pod="kube-system/coredns-66bc5c9577-llbxg"
	Nov 23 09:02:38 pause-397202 kubelet[1314]: I1123 09:02:38.020685    1314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9e1f38f4-aec9-4d81-9da4-8077ab957f85-config-volume\") pod \"coredns-66bc5c9577-llbxg\" (UID: \"9e1f38f4-aec9-4d81-9da4-8077ab957f85\") " pod="kube-system/coredns-66bc5c9577-llbxg"
	Nov 23 09:02:39 pause-397202 kubelet[1314]: I1123 09:02:39.306852    1314 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-llbxg" podStartSLOduration=13.306829748 podStartE2EDuration="13.306829748s" podCreationTimestamp="2025-11-23 09:02:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 09:02:39.306432581 +0000 UTC m=+18.197113003" watchObservedRunningTime="2025-11-23 09:02:39.306829748 +0000 UTC m=+18.197510171"
	Nov 23 09:02:47 pause-397202 kubelet[1314]: I1123 09:02:47.582821    1314 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Nov 23 09:02:47 pause-397202 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 23 09:02:47 pause-397202 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 23 09:02:47 pause-397202 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Nov 23 09:02:47 pause-397202 systemd[1]: kubelet.service: Consumed 1.210s CPU time.
	

-- /stdout --
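The post-mortem collection above can be reproduced by hand with the same commands the harness records in this section (a sketch: the profile/context name pause-397202 and the test-built binary path are taken from this run):

	out/minikube-linux-amd64 -p pause-397202 logs -n 25
	kubectl --context pause-397202 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running

The first dumps the tail of each component log shown above; the second lists any pods outside the Running phase.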
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-397202 -n pause-397202
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-397202 -n pause-397202: exit status 2 (356.637333ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
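Both status probes in this post-mortem use minikube's --format flag, which renders a Go template over the status struct: {{.APIServer}} here and {{.Host}} further down. A combined one-liner (a sketch; {{.Kubelet}} is assumed from the standard status fields and is not queried in this run):

	out/minikube-linux-amd64 status -p pause-397202 --format '{{.Host}} {{.Kubelet}} {{.APIServer}}'

With the cluster in this state it should likewise exit with status 2, which the harness treats as "may be ok".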
helpers_test.go:269: (dbg) Run:  kubectl --context pause-397202 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPause/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPause/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestPause/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect pause-397202
helpers_test.go:243: (dbg) docker inspect pause-397202:

-- stdout --
	[
	    {
	        "Id": "53d107e61b0ac5af02d2042b3d93838fbff8cec929cf352f3782e5385bdc4d48",
	        "Created": "2025-11-23T09:02:02.481657818Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 308142,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-23T09:02:02.520175353Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:133ca4ac39008d0056ad45d8cb70521d6b70d6e1b8bbff4678fd4b354efbdf70",
	        "ResolvConfPath": "/var/lib/docker/containers/53d107e61b0ac5af02d2042b3d93838fbff8cec929cf352f3782e5385bdc4d48/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/53d107e61b0ac5af02d2042b3d93838fbff8cec929cf352f3782e5385bdc4d48/hostname",
	        "HostsPath": "/var/lib/docker/containers/53d107e61b0ac5af02d2042b3d93838fbff8cec929cf352f3782e5385bdc4d48/hosts",
	        "LogPath": "/var/lib/docker/containers/53d107e61b0ac5af02d2042b3d93838fbff8cec929cf352f3782e5385bdc4d48/53d107e61b0ac5af02d2042b3d93838fbff8cec929cf352f3782e5385bdc4d48-json.log",
	        "Name": "/pause-397202",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-397202:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "pause-397202",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "53d107e61b0ac5af02d2042b3d93838fbff8cec929cf352f3782e5385bdc4d48",
	                "LowerDir": "/var/lib/docker/overlay2/0e202f1cd6771a178f21d7c9a2d52a69b658c6fec21f540ce3cba65868199149-init/diff:/var/lib/docker/overlay2/5d7426d7b7590330a796f9ce8929a3dfaf6f95af5e86f4e4ea7f2a7f53308616/diff",
	                "MergedDir": "/var/lib/docker/overlay2/0e202f1cd6771a178f21d7c9a2d52a69b658c6fec21f540ce3cba65868199149/merged",
	                "UpperDir": "/var/lib/docker/overlay2/0e202f1cd6771a178f21d7c9a2d52a69b658c6fec21f540ce3cba65868199149/diff",
	                "WorkDir": "/var/lib/docker/overlay2/0e202f1cd6771a178f21d7c9a2d52a69b658c6fec21f540ce3cba65868199149/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "pause-397202",
	                "Source": "/var/lib/docker/volumes/pause-397202/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-397202",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-397202",
	                "name.minikube.sigs.k8s.io": "pause-397202",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "701c2bb6fb4ab37b03e49222984024f6b5d0e4c8b0e2032d68933a021ce06edf",
	            "SandboxKey": "/var/run/docker/netns/701c2bb6fb4a",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33028"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33029"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33032"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33030"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33031"
	                    }
	                ]
	            },
	            "Networks": {
	                "pause-397202": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "3799106e8b741ffca9315403c00f96db99df0f304bc861c94077c9c95bb62b3d",
	                    "EndpointID": "a1972208148e7b90a3b28a4bea475bec1d808fc49b6ca9c94725c3362c433c0c",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "MacAddress": "12:cb:19:b4:d0:62",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "pause-397202",
	                        "53d107e61b0a"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
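The State fields this failure hinges on can be pulled without the full dump via docker inspect's Go-template flag (a sketch; the container name comes from this run):

	docker inspect -f 'status={{.State.Status}} paused={{.State.Paused}}' pause-397202

Per the JSON above this prints status=running paused=false. That is expected even after minikube pause, which acts on the Kubernetes components inside the node container (the kubelet log above shows systemd stopping kubelet), not on the outer Docker container.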
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-397202 -n pause-397202
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p pause-397202 -n pause-397202: exit status 2 (363.268335ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestPause/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPause/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p pause-397202 logs -n 25
helpers_test.go:260: TestPause/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                   ARGS                                                                   │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p cilium-741183                                                                                                                         │ cilium-741183             │ jenkins │ v1.37.0 │ 23 Nov 25 09:00 UTC │ 23 Nov 25 09:00 UTC │
	│ start   │ -p kubernetes-upgrade-064370 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio │ kubernetes-upgrade-064370 │ jenkins │ v1.37.0 │ 23 Nov 25 09:00 UTC │ 23 Nov 25 09:00 UTC │
	│ stop    │ -p kubernetes-upgrade-064370                                                                                                             │ kubernetes-upgrade-064370 │ jenkins │ v1.37.0 │ 23 Nov 25 09:00 UTC │ 23 Nov 25 09:00 UTC │
	│ start   │ -p kubernetes-upgrade-064370 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio │ kubernetes-upgrade-064370 │ jenkins │ v1.37.0 │ 23 Nov 25 09:00 UTC │                     │
	│ delete  │ -p offline-crio-228886                                                                                                                   │ offline-crio-228886       │ jenkins │ v1.37.0 │ 23 Nov 25 09:00 UTC │ 23 Nov 25 09:00 UTC │
	│ start   │ -p running-upgrade-760153 --memory=3072 --vm-driver=docker  --container-runtime=crio                                                     │ running-upgrade-760153    │ jenkins │ v1.32.0 │ 23 Nov 25 09:00 UTC │ 23 Nov 25 09:01 UTC │
	│ stop    │ stopped-upgrade-248610 stop                                                                                                              │ stopped-upgrade-248610    │ jenkins │ v1.32.0 │ 23 Nov 25 09:01 UTC │ 23 Nov 25 09:01 UTC │
	│ start   │ -p missing-upgrade-265184 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                 │ missing-upgrade-265184    │ jenkins │ v1.37.0 │ 23 Nov 25 09:01 UTC │ 23 Nov 25 09:01 UTC │
	│ start   │ -p stopped-upgrade-248610 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                 │ stopped-upgrade-248610    │ jenkins │ v1.37.0 │ 23 Nov 25 09:01 UTC │ 23 Nov 25 09:01 UTC │
	│ start   │ -p running-upgrade-760153 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                 │ running-upgrade-760153    │ jenkins │ v1.37.0 │ 23 Nov 25 09:01 UTC │ 23 Nov 25 09:01 UTC │
	│ delete  │ -p stopped-upgrade-248610                                                                                                                │ stopped-upgrade-248610    │ jenkins │ v1.37.0 │ 23 Nov 25 09:01 UTC │ 23 Nov 25 09:01 UTC │
	│ start   │ -p force-systemd-flag-786725 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio              │ force-systemd-flag-786725 │ jenkins │ v1.37.0 │ 23 Nov 25 09:01 UTC │ 23 Nov 25 09:01 UTC │
	│ delete  │ -p running-upgrade-760153                                                                                                                │ running-upgrade-760153    │ jenkins │ v1.37.0 │ 23 Nov 25 09:01 UTC │ 23 Nov 25 09:01 UTC │
	│ start   │ -p force-systemd-env-696878 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                               │ force-systemd-env-696878  │ jenkins │ v1.37.0 │ 23 Nov 25 09:01 UTC │ 23 Nov 25 09:02 UTC │
	│ ssh     │ force-systemd-flag-786725 ssh cat /etc/crio/crio.conf.d/02-crio.conf                                                                     │ force-systemd-flag-786725 │ jenkins │ v1.37.0 │ 23 Nov 25 09:01 UTC │ 23 Nov 25 09:01 UTC │
	│ delete  │ -p force-systemd-flag-786725                                                                                                             │ force-systemd-flag-786725 │ jenkins │ v1.37.0 │ 23 Nov 25 09:01 UTC │ 23 Nov 25 09:01 UTC │
	│ delete  │ -p missing-upgrade-265184                                                                                                                │ missing-upgrade-265184    │ jenkins │ v1.37.0 │ 23 Nov 25 09:01 UTC │ 23 Nov 25 09:01 UTC │
	│ start   │ -p pause-397202 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio                                │ pause-397202              │ jenkins │ v1.37.0 │ 23 Nov 25 09:01 UTC │ 23 Nov 25 09:02 UTC │
	│ start   │ -p cert-expiration-723349 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                   │ cert-expiration-723349    │ jenkins │ v1.37.0 │ 23 Nov 25 09:01 UTC │ 23 Nov 25 09:02 UTC │
	│ delete  │ -p force-systemd-env-696878                                                                                                              │ force-systemd-env-696878  │ jenkins │ v1.37.0 │ 23 Nov 25 09:02 UTC │ 23 Nov 25 09:02 UTC │
	│ start   │ -p NoKubernetes-457254 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio                            │ NoKubernetes-457254       │ jenkins │ v1.37.0 │ 23 Nov 25 09:02 UTC │                     │
	│ start   │ -p NoKubernetes-457254 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                    │ NoKubernetes-457254       │ jenkins │ v1.37.0 │ 23 Nov 25 09:02 UTC │ 23 Nov 25 09:02 UTC │
	│ start   │ -p NoKubernetes-457254 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                    │ NoKubernetes-457254       │ jenkins │ v1.37.0 │ 23 Nov 25 09:02 UTC │                     │
	│ start   │ -p pause-397202 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                         │ pause-397202              │ jenkins │ v1.37.0 │ 23 Nov 25 09:02 UTC │ 23 Nov 25 09:02 UTC │
	│ pause   │ -p pause-397202 --alsologtostderr -v=5                                                                                                   │ pause-397202              │ jenkins │ v1.37.0 │ 23 Nov 25 09:02 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/23 09:02:41
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1123 09:02:41.425406  318073 out.go:360] Setting OutFile to fd 1 ...
	I1123 09:02:41.425676  318073 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 09:02:41.425687  318073 out.go:374] Setting ErrFile to fd 2...
	I1123 09:02:41.425694  318073 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 09:02:41.425927  318073 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21969-103686/.minikube/bin
	I1123 09:02:41.426372  318073 out.go:368] Setting JSON to false
	I1123 09:02:41.427561  318073 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":6301,"bootTime":1763882260,"procs":319,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1123 09:02:41.427618  318073 start.go:143] virtualization: kvm guest
	I1123 09:02:41.429602  318073 out.go:179] * [pause-397202] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1123 09:02:41.430682  318073 out.go:179]   - MINIKUBE_LOCATION=21969
	I1123 09:02:41.430686  318073 notify.go:221] Checking for updates...
	I1123 09:02:41.431957  318073 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1123 09:02:41.433206  318073 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21969-103686/kubeconfig
	I1123 09:02:41.434336  318073 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21969-103686/.minikube
	I1123 09:02:41.435378  318073 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1123 09:02:41.436425  318073 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1123 09:02:41.437947  318073 config.go:182] Loaded profile config "pause-397202": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 09:02:41.438502  318073 driver.go:422] Setting default libvirt URI to qemu:///system
	I1123 09:02:41.462129  318073 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1123 09:02:41.462234  318073 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 09:02:41.522384  318073 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:81 OomKillDisable:false NGoroutines:87 SystemTime:2025-11-23 09:02:41.510807423 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1123 09:02:41.522499  318073 docker.go:319] overlay module found
	I1123 09:02:41.526083  318073 out.go:179] * Using the docker driver based on existing profile
	I1123 09:02:41.527270  318073 start.go:309] selected driver: docker
	I1123 09:02:41.527288  318073 start.go:927] validating driver "docker" against &{Name:pause-397202 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-397202 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 09:02:41.527380  318073 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1123 09:02:41.527448  318073 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 09:02:41.589229  318073 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:81 OomKillDisable:false NGoroutines:87 SystemTime:2025-11-23 09:02:41.577546643 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1123 09:02:41.589887  318073 cni.go:84] Creating CNI manager for ""
	I1123 09:02:41.589990  318073 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1123 09:02:41.590076  318073 start.go:353] cluster config:
	{Name:pause-397202 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-397202 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 09:02:41.591897  318073 out.go:179] * Starting "pause-397202" primary control-plane node in "pause-397202" cluster
	I1123 09:02:41.593100  318073 cache.go:134] Beginning downloading kic base image for docker with crio
	I1123 09:02:41.594235  318073 out.go:179] * Pulling base image v0.0.48-1763789673-21948 ...
	I1123 09:02:41.595333  318073 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1123 09:02:41.595369  318073 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21969-103686/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1123 09:02:41.595385  318073 cache.go:65] Caching tarball of preloaded images
	I1123 09:02:41.595420  318073 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1123 09:02:41.595471  318073 preload.go:238] Found /home/jenkins/minikube-integration/21969-103686/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1123 09:02:41.595492  318073 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1123 09:02:41.595635  318073 profile.go:143] Saving config to /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/pause-397202/config.json ...
	I1123 09:02:41.619453  318073 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon, skipping pull
	I1123 09:02:41.619474  318073 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in daemon, skipping load
	I1123 09:02:41.619488  318073 cache.go:243] Successfully downloaded all kic artifacts
	I1123 09:02:41.619524  318073 start.go:360] acquireMachinesLock for pause-397202: {Name:mk86d460701ca2570c9c98015bd63118b40a5ef2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1123 09:02:41.619583  318073 start.go:364] duration metric: took 40.336µs to acquireMachinesLock for "pause-397202"
	I1123 09:02:41.619599  318073 start.go:96] Skipping create...Using existing machine configuration
	I1123 09:02:41.619608  318073 fix.go:54] fixHost starting: 
	I1123 09:02:41.619823  318073 cli_runner.go:164] Run: docker container inspect pause-397202 --format={{.State.Status}}
	I1123 09:02:41.641456  318073 fix.go:112] recreateIfNeeded on pause-397202: state=Running err=<nil>
	W1123 09:02:41.641496  318073 fix.go:138] unexpected machine state, will restart: <nil>
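The fixHost step above decides whether the machine must be recreated by reading the container's state. A minimal hand-run equivalent of that check, using the same profile container named in the log:

    # Prints "running", "paused", or "exited" for the profile container
    docker container inspect pause-397202 --format '{{.State.Status}}'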
	I1123 09:02:40.130869  317334 ssh_runner.go:195] Run: sudo systemctl stop -f kubelet
	I1123 09:02:40.161700  317334 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1123 09:02:40.161778  317334 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1123 09:02:40.190007  317334 cri.go:89] found id: "8653fd007ce583a2d825eb177fdef0cce573312f336809a2c9ce21ec4787bdf8"
	I1123 09:02:40.190035  317334 cri.go:89] found id: "e03d60227209ee0a10353ceee3143cd3a825f70fbe920c9b6a144db4991ee676"
	I1123 09:02:40.190040  317334 cri.go:89] found id: "f1b9fa1dd04a10f21f27a858f15713e3827efdf9ddb6e87ae16648c562ab8894"
	I1123 09:02:40.190044  317334 cri.go:89] found id: "4bf24e6b47b0cec1bfec2975525a89c97ba7b454c63c75a198832221c2ee9e14"
	I1123 09:02:40.190047  317334 cri.go:89] found id: ""
	W1123 09:02:40.190055  317334 kubeadm.go:839] found 4 kube-system containers to stop
	I1123 09:02:40.190061  317334 cri.go:252] Stopping containers: [8653fd007ce583a2d825eb177fdef0cce573312f336809a2c9ce21ec4787bdf8 e03d60227209ee0a10353ceee3143cd3a825f70fbe920c9b6a144db4991ee676 f1b9fa1dd04a10f21f27a858f15713e3827efdf9ddb6e87ae16648c562ab8894 4bf24e6b47b0cec1bfec2975525a89c97ba7b454c63c75a198832221c2ee9e14]
	I1123 09:02:40.190114  317334 ssh_runner.go:195] Run: which crictl
	I1123 09:02:40.194226  317334 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl stop --timeout=10 8653fd007ce583a2d825eb177fdef0cce573312f336809a2c9ce21ec4787bdf8 e03d60227209ee0a10353ceee3143cd3a825f70fbe920c9b6a144db4991ee676 f1b9fa1dd04a10f21f27a858f15713e3827efdf9ddb6e87ae16648c562ab8894 4bf24e6b47b0cec1bfec2975525a89c97ba7b454c63c75a198832221c2ee9e14
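Stopping the control plane is a two-step pattern: list kube-system container IDs by CRI label, then stop them with a grace period. A rough standalone sketch of the same sequence, assuming crictl is on the PATH:

    # Collect kube-system container IDs, then stop them with a 10s grace period
    ids=$(sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system)
    [ -n "$ids" ] && sudo crictl stop --timeout=10 $ids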
	I1123 09:02:38.778478  284685 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1123 09:02:38.778954  284685 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1123 09:02:38.779026  284685 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1123 09:02:38.779080  284685 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1123 09:02:38.810190  284685 cri.go:89] found id: "7625a0bbc7717e3a2af3d02daa04ca8fad92012d00a0e55d65b4e67dfb7e49bd"
	I1123 09:02:38.810212  284685 cri.go:89] found id: ""
	I1123 09:02:38.810222  284685 logs.go:282] 1 containers: [7625a0bbc7717e3a2af3d02daa04ca8fad92012d00a0e55d65b4e67dfb7e49bd]
	I1123 09:02:38.810281  284685 ssh_runner.go:195] Run: which crictl
	I1123 09:02:38.814197  284685 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1123 09:02:38.814257  284685 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1123 09:02:38.844469  284685 cri.go:89] found id: ""
	I1123 09:02:38.844499  284685 logs.go:282] 0 containers: []
	W1123 09:02:38.844510  284685 logs.go:284] No container was found matching "etcd"
	I1123 09:02:38.844518  284685 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1123 09:02:38.844580  284685 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1123 09:02:38.875170  284685 cri.go:89] found id: ""
	I1123 09:02:38.875192  284685 logs.go:282] 0 containers: []
	W1123 09:02:38.875200  284685 logs.go:284] No container was found matching "coredns"
	I1123 09:02:38.875206  284685 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1123 09:02:38.875260  284685 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1123 09:02:38.905851  284685 cri.go:89] found id: "fade67abbc989ba285c7606dd3499da4894a7554fc5ea7609dbd2b5cad03e58d"
	I1123 09:02:38.905875  284685 cri.go:89] found id: ""
	I1123 09:02:38.905886  284685 logs.go:282] 1 containers: [fade67abbc989ba285c7606dd3499da4894a7554fc5ea7609dbd2b5cad03e58d]
	I1123 09:02:38.905944  284685 ssh_runner.go:195] Run: which crictl
	I1123 09:02:38.910119  284685 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1123 09:02:38.910180  284685 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1123 09:02:38.937312  284685 cri.go:89] found id: ""
	I1123 09:02:38.937339  284685 logs.go:282] 0 containers: []
	W1123 09:02:38.937349  284685 logs.go:284] No container was found matching "kube-proxy"
	I1123 09:02:38.937357  284685 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1123 09:02:38.937420  284685 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1123 09:02:38.968550  284685 cri.go:89] found id: "d38c81ac5f3288a5c1ad59bb1d159bb76a305a9504a74fa14594dca89bdf4306"
	I1123 09:02:38.968572  284685 cri.go:89] found id: ""
	I1123 09:02:38.968580  284685 logs.go:282] 1 containers: [d38c81ac5f3288a5c1ad59bb1d159bb76a305a9504a74fa14594dca89bdf4306]
	I1123 09:02:38.968639  284685 ssh_runner.go:195] Run: which crictl
	I1123 09:02:38.972540  284685 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1123 09:02:38.972614  284685 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1123 09:02:38.999487  284685 cri.go:89] found id: ""
	I1123 09:02:38.999511  284685 logs.go:282] 0 containers: []
	W1123 09:02:38.999519  284685 logs.go:284] No container was found matching "kindnet"
	I1123 09:02:38.999525  284685 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1123 09:02:38.999585  284685 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1123 09:02:39.026006  284685 cri.go:89] found id: ""
	I1123 09:02:39.026032  284685 logs.go:282] 0 containers: []
	W1123 09:02:39.026041  284685 logs.go:284] No container was found matching "storage-provisioner"
	I1123 09:02:39.026051  284685 logs.go:123] Gathering logs for describe nodes ...
	I1123 09:02:39.026064  284685 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1123 09:02:39.090293  284685 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1123 09:02:39.090313  284685 logs.go:123] Gathering logs for kube-apiserver [7625a0bbc7717e3a2af3d02daa04ca8fad92012d00a0e55d65b4e67dfb7e49bd] ...
	I1123 09:02:39.090328  284685 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7625a0bbc7717e3a2af3d02daa04ca8fad92012d00a0e55d65b4e67dfb7e49bd"
	I1123 09:02:39.124773  284685 logs.go:123] Gathering logs for kube-scheduler [fade67abbc989ba285c7606dd3499da4894a7554fc5ea7609dbd2b5cad03e58d] ...
	I1123 09:02:39.124800  284685 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fade67abbc989ba285c7606dd3499da4894a7554fc5ea7609dbd2b5cad03e58d"
	I1123 09:02:39.175221  284685 logs.go:123] Gathering logs for kube-controller-manager [d38c81ac5f3288a5c1ad59bb1d159bb76a305a9504a74fa14594dca89bdf4306] ...
	I1123 09:02:39.175266  284685 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d38c81ac5f3288a5c1ad59bb1d159bb76a305a9504a74fa14594dca89bdf4306"
	I1123 09:02:39.203214  284685 logs.go:123] Gathering logs for CRI-O ...
	I1123 09:02:39.203256  284685 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1123 09:02:39.244803  284685 logs.go:123] Gathering logs for container status ...
	I1123 09:02:39.244846  284685 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1123 09:02:39.280005  284685 logs.go:123] Gathering logs for kubelet ...
	I1123 09:02:39.280038  284685 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1123 09:02:39.370366  284685 logs.go:123] Gathering logs for dmesg ...
	I1123 09:02:39.370399  284685 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1123 09:02:41.890398  284685 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1123 09:02:41.890806  284685 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1123 09:02:41.890860  284685 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1123 09:02:41.890910  284685 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1123 09:02:41.917486  284685 cri.go:89] found id: "7625a0bbc7717e3a2af3d02daa04ca8fad92012d00a0e55d65b4e67dfb7e49bd"
	I1123 09:02:41.917508  284685 cri.go:89] found id: ""
	I1123 09:02:41.917517  284685 logs.go:282] 1 containers: [7625a0bbc7717e3a2af3d02daa04ca8fad92012d00a0e55d65b4e67dfb7e49bd]
	I1123 09:02:41.917575  284685 ssh_runner.go:195] Run: which crictl
	I1123 09:02:41.921794  284685 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1123 09:02:41.921865  284685 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1123 09:02:41.950855  284685 cri.go:89] found id: ""
	I1123 09:02:41.950884  284685 logs.go:282] 0 containers: []
	W1123 09:02:41.950903  284685 logs.go:284] No container was found matching "etcd"
	I1123 09:02:41.950909  284685 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1123 09:02:41.950958  284685 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1123 09:02:41.980098  284685 cri.go:89] found id: ""
	I1123 09:02:41.980127  284685 logs.go:282] 0 containers: []
	W1123 09:02:41.980139  284685 logs.go:284] No container was found matching "coredns"
	I1123 09:02:41.980147  284685 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1123 09:02:41.980209  284685 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1123 09:02:42.009694  284685 cri.go:89] found id: "fade67abbc989ba285c7606dd3499da4894a7554fc5ea7609dbd2b5cad03e58d"
	I1123 09:02:42.009716  284685 cri.go:89] found id: ""
	I1123 09:02:42.009727  284685 logs.go:282] 1 containers: [fade67abbc989ba285c7606dd3499da4894a7554fc5ea7609dbd2b5cad03e58d]
	I1123 09:02:42.009786  284685 ssh_runner.go:195] Run: which crictl
	I1123 09:02:42.014067  284685 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1123 09:02:42.014133  284685 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1123 09:02:42.042244  284685 cri.go:89] found id: ""
	I1123 09:02:42.042269  284685 logs.go:282] 0 containers: []
	W1123 09:02:42.042279  284685 logs.go:284] No container was found matching "kube-proxy"
	I1123 09:02:42.042287  284685 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1123 09:02:42.042346  284685 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1123 09:02:42.071725  284685 cri.go:89] found id: "d38c81ac5f3288a5c1ad59bb1d159bb76a305a9504a74fa14594dca89bdf4306"
	I1123 09:02:42.071753  284685 cri.go:89] found id: ""
	I1123 09:02:42.071765  284685 logs.go:282] 1 containers: [d38c81ac5f3288a5c1ad59bb1d159bb76a305a9504a74fa14594dca89bdf4306]
	I1123 09:02:42.071821  284685 ssh_runner.go:195] Run: which crictl
	I1123 09:02:42.075736  284685 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1123 09:02:42.075789  284685 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1123 09:02:42.104268  284685 cri.go:89] found id: ""
	I1123 09:02:42.104294  284685 logs.go:282] 0 containers: []
	W1123 09:02:42.104303  284685 logs.go:284] No container was found matching "kindnet"
	I1123 09:02:42.104310  284685 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1123 09:02:42.104370  284685 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1123 09:02:42.129319  284685 cri.go:89] found id: ""
	I1123 09:02:42.129345  284685 logs.go:282] 0 containers: []
	W1123 09:02:42.129355  284685 logs.go:284] No container was found matching "storage-provisioner"
	I1123 09:02:42.129366  284685 logs.go:123] Gathering logs for container status ...
	I1123 09:02:42.129383  284685 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1123 09:02:42.159589  284685 logs.go:123] Gathering logs for kubelet ...
	I1123 09:02:42.159615  284685 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1123 09:02:42.255293  284685 logs.go:123] Gathering logs for dmesg ...
	I1123 09:02:42.255329  284685 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1123 09:02:42.274505  284685 logs.go:123] Gathering logs for describe nodes ...
	I1123 09:02:42.274536  284685 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1123 09:02:42.333270  284685 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1123 09:02:42.333290  284685 logs.go:123] Gathering logs for kube-apiserver [7625a0bbc7717e3a2af3d02daa04ca8fad92012d00a0e55d65b4e67dfb7e49bd] ...
	I1123 09:02:42.333303  284685 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7625a0bbc7717e3a2af3d02daa04ca8fad92012d00a0e55d65b4e67dfb7e49bd"
	I1123 09:02:42.365074  284685 logs.go:123] Gathering logs for kube-scheduler [fade67abbc989ba285c7606dd3499da4894a7554fc5ea7609dbd2b5cad03e58d] ...
	I1123 09:02:42.365103  284685 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fade67abbc989ba285c7606dd3499da4894a7554fc5ea7609dbd2b5cad03e58d"
	I1123 09:02:42.410337  284685 logs.go:123] Gathering logs for kube-controller-manager [d38c81ac5f3288a5c1ad59bb1d159bb76a305a9504a74fa14594dca89bdf4306] ...
	I1123 09:02:42.410364  284685 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d38c81ac5f3288a5c1ad59bb1d159bb76a305a9504a74fa14594dca89bdf4306"
	I1123 09:02:42.437770  284685 logs.go:123] Gathering logs for CRI-O ...
	I1123 09:02:42.437797  284685 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
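Each retry round above opens with a healthz probe against the apiserver. A hand-run equivalent of that probe (-k skips TLS verification, since the cluster CA is typically not in the host trust store):

    curl -k --max-time 2 https://192.168.85.2:8443/healthz
    # "connection refused" here corresponds to the api_server.go:269 lines above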
	I1123 09:02:41.643172  318073 out.go:252] * Updating the running docker "pause-397202" container ...
	I1123 09:02:41.643209  318073 machine.go:94] provisionDockerMachine start ...
	I1123 09:02:41.643273  318073 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-397202
	I1123 09:02:41.661997  318073 main.go:143] libmachine: Using SSH client type: native
	I1123 09:02:41.662329  318073 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33028 <nil> <nil>}
	I1123 09:02:41.662347  318073 main.go:143] libmachine: About to run SSH command:
	hostname
	I1123 09:02:41.807434  318073 main.go:143] libmachine: SSH cmd err, output: <nil>: pause-397202
	
	I1123 09:02:41.807492  318073 ubuntu.go:182] provisioning hostname "pause-397202"
	I1123 09:02:41.807570  318073 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-397202
	I1123 09:02:41.826730  318073 main.go:143] libmachine: Using SSH client type: native
	I1123 09:02:41.826939  318073 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33028 <nil> <nil>}
	I1123 09:02:41.826952  318073 main.go:143] libmachine: About to run SSH command:
	sudo hostname pause-397202 && echo "pause-397202" | sudo tee /etc/hostname
	I1123 09:02:41.983185  318073 main.go:143] libmachine: SSH cmd err, output: <nil>: pause-397202
	
	I1123 09:02:41.983270  318073 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-397202
	I1123 09:02:42.004930  318073 main.go:143] libmachine: Using SSH client type: native
	I1123 09:02:42.005316  318073 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33028 <nil> <nil>}
	I1123 09:02:42.005346  318073 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-397202' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-397202/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-397202' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1123 09:02:42.154841  318073 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1123 09:02:42.154874  318073 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21969-103686/.minikube CaCertPath:/home/jenkins/minikube-integration/21969-103686/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21969-103686/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21969-103686/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21969-103686/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21969-103686/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21969-103686/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21969-103686/.minikube}
	I1123 09:02:42.154900  318073 ubuntu.go:190] setting up certificates
	I1123 09:02:42.154927  318073 provision.go:84] configureAuth start
	I1123 09:02:42.155015  318073 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-397202
	I1123 09:02:42.174111  318073 provision.go:143] copyHostCerts
	I1123 09:02:42.174182  318073 exec_runner.go:144] found /home/jenkins/minikube-integration/21969-103686/.minikube/cert.pem, removing ...
	I1123 09:02:42.174206  318073 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21969-103686/.minikube/cert.pem
	I1123 09:02:42.174291  318073 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21969-103686/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21969-103686/.minikube/cert.pem (1123 bytes)
	I1123 09:02:42.174401  318073 exec_runner.go:144] found /home/jenkins/minikube-integration/21969-103686/.minikube/key.pem, removing ...
	I1123 09:02:42.174413  318073 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21969-103686/.minikube/key.pem
	I1123 09:02:42.174452  318073 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21969-103686/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21969-103686/.minikube/key.pem (1675 bytes)
	I1123 09:02:42.174530  318073 exec_runner.go:144] found /home/jenkins/minikube-integration/21969-103686/.minikube/ca.pem, removing ...
	I1123 09:02:42.174540  318073 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21969-103686/.minikube/ca.pem
	I1123 09:02:42.174575  318073 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21969-103686/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21969-103686/.minikube/ca.pem (1078 bytes)
	I1123 09:02:42.174655  318073 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21969-103686/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21969-103686/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21969-103686/.minikube/certs/ca-key.pem org=jenkins.pause-397202 san=[127.0.0.1 192.168.94.2 localhost minikube pause-397202]
	I1123 09:02:42.266102  318073 provision.go:177] copyRemoteCerts
	I1123 09:02:42.266166  318073 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1123 09:02:42.266216  318073 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-397202
	I1123 09:02:42.286657  318073 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33028 SSHKeyPath:/home/jenkins/minikube-integration/21969-103686/.minikube/machines/pause-397202/id_rsa Username:docker}
	I1123 09:02:42.390049  318073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-103686/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1123 09:02:42.407763  318073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-103686/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1123 09:02:42.425650  318073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-103686/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1123 09:02:42.444659  318073 provision.go:87] duration metric: took 289.712666ms to configureAuth
	I1123 09:02:42.444694  318073 ubuntu.go:206] setting minikube options for container-runtime
	I1123 09:02:42.444986  318073 config.go:182] Loaded profile config "pause-397202": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 09:02:42.445149  318073 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-397202
	I1123 09:02:42.464874  318073 main.go:143] libmachine: Using SSH client type: native
	I1123 09:02:42.465128  318073 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33028 <nil> <nil>}
	I1123 09:02:42.465145  318073 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1123 09:02:42.801593  318073 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1123 09:02:42.801621  318073 machine.go:97] duration metric: took 1.158404937s to provisionDockerMachine
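The container-runtime step writes CRIO_MINIKUBE_OPTIONS to /etc/sysconfig/crio.minikube and restarts crio. To confirm the file landed and that the crio unit actually sources it (the EnvironmentFile wiring is an assumption about the base image, not shown in the log):

    cat /etc/sysconfig/crio.minikube
    systemctl cat crio | grep -i EnvironmentFile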
	I1123 09:02:42.801633  318073 start.go:293] postStartSetup for "pause-397202" (driver="docker")
	I1123 09:02:42.801645  318073 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1123 09:02:42.801714  318073 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1123 09:02:42.801761  318073 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-397202
	I1123 09:02:42.819627  318073 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33028 SSHKeyPath:/home/jenkins/minikube-integration/21969-103686/.minikube/machines/pause-397202/id_rsa Username:docker}
	I1123 09:02:42.921145  318073 ssh_runner.go:195] Run: cat /etc/os-release
	I1123 09:02:42.924741  318073 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1123 09:02:42.924779  318073 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1123 09:02:42.924791  318073 filesync.go:126] Scanning /home/jenkins/minikube-integration/21969-103686/.minikube/addons for local assets ...
	I1123 09:02:42.924850  318073 filesync.go:126] Scanning /home/jenkins/minikube-integration/21969-103686/.minikube/files for local assets ...
	I1123 09:02:42.924977  318073 filesync.go:149] local asset: /home/jenkins/minikube-integration/21969-103686/.minikube/files/etc/ssl/certs/1072342.pem -> 1072342.pem in /etc/ssl/certs
	I1123 09:02:42.925161  318073 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1123 09:02:42.932379  318073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-103686/.minikube/files/etc/ssl/certs/1072342.pem --> /etc/ssl/certs/1072342.pem (1708 bytes)
	I1123 09:02:42.949370  318073 start.go:296] duration metric: took 147.723069ms for postStartSetup
	I1123 09:02:42.949440  318073 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1123 09:02:42.949485  318073 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-397202
	I1123 09:02:42.966935  318073 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33028 SSHKeyPath:/home/jenkins/minikube-integration/21969-103686/.minikube/machines/pause-397202/id_rsa Username:docker}
	I1123 09:02:43.066143  318073 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1123 09:02:43.070914  318073 fix.go:56] duration metric: took 1.451297959s for fixHost
	I1123 09:02:43.070948  318073 start.go:83] releasing machines lock for "pause-397202", held for 1.451354677s
	I1123 09:02:43.071047  318073 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-397202
	I1123 09:02:43.089276  318073 ssh_runner.go:195] Run: cat /version.json
	I1123 09:02:43.089322  318073 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-397202
	I1123 09:02:43.089347  318073 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1123 09:02:43.089411  318073 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-397202
	I1123 09:02:43.109146  318073 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33028 SSHKeyPath:/home/jenkins/minikube-integration/21969-103686/.minikube/machines/pause-397202/id_rsa Username:docker}
	I1123 09:02:43.110397  318073 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33028 SSHKeyPath:/home/jenkins/minikube-integration/21969-103686/.minikube/machines/pause-397202/id_rsa Username:docker}
	I1123 09:02:43.261825  318073 ssh_runner.go:195] Run: systemctl --version
	I1123 09:02:43.268846  318073 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1123 09:02:43.305670  318073 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1123 09:02:43.310694  318073 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1123 09:02:43.310769  318073 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1123 09:02:43.318835  318073 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
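The find/mv above disables stray bridge and podman CNI configs by renaming them with a .mk_disabled suffix; in this run nothing matched, but a hypothetical inverse, should renamed configs ever need restoring, would be:

    sudo find /etc/cni/net.d -maxdepth 1 -name '*.mk_disabled' \
      -exec sh -c 'mv "$1" "${1%.mk_disabled}"' _ {} \;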
	I1123 09:02:43.318857  318073 start.go:496] detecting cgroup driver to use...
	I1123 09:02:43.318893  318073 detect.go:190] detected "systemd" cgroup driver on host os
	I1123 09:02:43.318937  318073 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1123 09:02:43.333352  318073 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1123 09:02:43.346227  318073 docker.go:218] disabling cri-docker service (if available) ...
	I1123 09:02:43.346290  318073 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1123 09:02:43.361716  318073 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1123 09:02:43.374699  318073 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1123 09:02:43.478316  318073 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1123 09:02:43.581894  318073 docker.go:234] disabling docker service ...
	I1123 09:02:43.581984  318073 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1123 09:02:43.596813  318073 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1123 09:02:43.609860  318073 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1123 09:02:43.720157  318073 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1123 09:02:43.828262  318073 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1123 09:02:43.841310  318073 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1123 09:02:43.855418  318073 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1123 09:02:43.855469  318073 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 09:02:43.864391  318073 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1123 09:02:43.864449  318073 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 09:02:43.873763  318073 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 09:02:43.882725  318073 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 09:02:43.891487  318073 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1123 09:02:43.899537  318073 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 09:02:43.908474  318073 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 09:02:43.916935  318073 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 09:02:43.925542  318073 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1123 09:02:43.932860  318073 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1123 09:02:43.940114  318073 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 09:02:44.042227  318073 ssh_runner.go:195] Run: sudo systemctl restart crio
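The sed edits above rewrite /etc/crio/crio.conf.d/02-crio.conf in place before the restart. A quick spot-check that the expected keys ended up in the file:

    grep -nE 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
      /etc/crio/crio.conf.d/02-crio.conf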
	I1123 09:02:44.231298  318073 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1123 09:02:44.231364  318073 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1123 09:02:44.236186  318073 start.go:564] Will wait 60s for crictl version
	I1123 09:02:44.236250  318073 ssh_runner.go:195] Run: which crictl
	I1123 09:02:44.239919  318073 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1123 09:02:44.264666  318073 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1123 09:02:44.264742  318073 ssh_runner.go:195] Run: crio --version
	I1123 09:02:44.294183  318073 ssh_runner.go:195] Run: crio --version
	I1123 09:02:44.325732  318073 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1123 09:02:44.326843  318073 cli_runner.go:164] Run: docker network inspect pause-397202 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1123 09:02:44.344872  318073 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1123 09:02:44.349579  318073 kubeadm.go:884] updating cluster {Name:pause-397202 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-397202 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1123 09:02:44.349737  318073 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1123 09:02:44.349788  318073 ssh_runner.go:195] Run: sudo crictl images --output json
	I1123 09:02:44.380798  318073 crio.go:514] all images are preloaded for cri-o runtime.
	I1123 09:02:44.380816  318073 crio.go:433] Images already preloaded, skipping extraction
	I1123 09:02:44.380860  318073 ssh_runner.go:195] Run: sudo crictl images --output json
	I1123 09:02:44.407733  318073 crio.go:514] all images are preloaded for cri-o runtime.
	I1123 09:02:44.407755  318073 cache_images.go:86] Images are preloaded, skipping loading
	I1123 09:02:44.407763  318073 kubeadm.go:935] updating node { 192.168.94.2 8443 v1.34.1 crio true true} ...
	I1123 09:02:44.407874  318073 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=pause-397202 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:pause-397202 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
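The empty ExecStart= in the unit above is the standard systemd override idiom: it clears the ExecStart list inherited from the base kubelet.service so the next line replaces the command instead of conflicting with it. A minimal sketch of the same pattern as a drop-in (paths taken from the log, flags trimmed for brevity):

    sudo mkdir -p /etc/systemd/system/kubelet.service.d
    cat <<'EOF' | sudo tee /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
    [Service]
    ExecStart=
    ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --config=/var/lib/kubelet/config.yaml
    EOF
    sudo systemctl daemon-reload && sudo systemctl restart kubelet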
	I1123 09:02:44.407978  318073 ssh_runner.go:195] Run: crio config
	I1123 09:02:44.453732  318073 cni.go:84] Creating CNI manager for ""
	I1123 09:02:44.453751  318073 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1123 09:02:44.453767  318073 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1123 09:02:44.453789  318073 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-397202 NodeName:pause-397202 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1123 09:02:44.453952  318073 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-397202"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1123 09:02:44.454048  318073 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1123 09:02:44.462498  318073 binaries.go:51] Found k8s binaries, skipping transfer
	I1123 09:02:44.462561  318073 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1123 09:02:44.470535  318073 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (362 bytes)
	I1123 09:02:44.483360  318073 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1123 09:02:44.496320  318073 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2208 bytes)
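With kubeadm.yaml.new on disk, the generated config can be sanity-checked before it is used; recent kubeadm releases ship a validate subcommand (treating its availability at this exact binary path as an assumption):

    sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate \
      --config /var/tmp/minikube/kubeadm.yaml.new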
	I1123 09:02:44.509768  318073 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1123 09:02:44.513673  318073 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 09:02:44.619089  318073 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 09:02:44.632619  318073 certs.go:69] Setting up /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/pause-397202 for IP: 192.168.94.2
	I1123 09:02:44.632638  318073 certs.go:195] generating shared ca certs ...
	I1123 09:02:44.632666  318073 certs.go:227] acquiring lock for ca certs: {Name:mkeed0bc088459219396fb35c578dfb0927f31a3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 09:02:44.632827  318073 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21969-103686/.minikube/ca.key
	I1123 09:02:44.632865  318073 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21969-103686/.minikube/proxy-client-ca.key
	I1123 09:02:44.632875  318073 certs.go:257] generating profile certs ...
	I1123 09:02:44.632952  318073 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/pause-397202/client.key
	I1123 09:02:44.633024  318073 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/pause-397202/apiserver.key.fd956988
	I1123 09:02:44.633056  318073 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/pause-397202/proxy-client.key
	I1123 09:02:44.633156  318073 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-103686/.minikube/certs/107234.pem (1338 bytes)
	W1123 09:02:44.633184  318073 certs.go:480] ignoring /home/jenkins/minikube-integration/21969-103686/.minikube/certs/107234_empty.pem, impossibly tiny 0 bytes
	I1123 09:02:44.633193  318073 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-103686/.minikube/certs/ca-key.pem (1675 bytes)
	I1123 09:02:44.633220  318073 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-103686/.minikube/certs/ca.pem (1078 bytes)
	I1123 09:02:44.633244  318073 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-103686/.minikube/certs/cert.pem (1123 bytes)
	I1123 09:02:44.633267  318073 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-103686/.minikube/certs/key.pem (1675 bytes)
	I1123 09:02:44.633305  318073 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-103686/.minikube/files/etc/ssl/certs/1072342.pem (1708 bytes)
	I1123 09:02:44.633900  318073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-103686/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1123 09:02:44.652888  318073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-103686/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1123 09:02:44.672081  318073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-103686/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1123 09:02:44.690678  318073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-103686/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1671 bytes)
	I1123 09:02:44.709148  318073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/pause-397202/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1123 09:02:44.728081  318073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/pause-397202/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1123 09:02:44.747278  318073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/pause-397202/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1123 09:02:44.765761  318073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/pause-397202/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1123 09:02:44.784060  318073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-103686/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1123 09:02:44.802312  318073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-103686/.minikube/certs/107234.pem --> /usr/share/ca-certificates/107234.pem (1338 bytes)
	I1123 09:02:44.820921  318073 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-103686/.minikube/files/etc/ssl/certs/1072342.pem --> /usr/share/ca-certificates/1072342.pem (1708 bytes)
	I1123 09:02:44.838887  318073 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1123 09:02:44.851918  318073 ssh_runner.go:195] Run: openssl version
	I1123 09:02:44.858151  318073 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1123 09:02:44.867594  318073 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1123 09:02:44.871677  318073 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 23 08:20 /usr/share/ca-certificates/minikubeCA.pem
	I1123 09:02:44.871741  318073 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1123 09:02:44.907024  318073 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1123 09:02:44.915559  318073 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/107234.pem && ln -fs /usr/share/ca-certificates/107234.pem /etc/ssl/certs/107234.pem"
	I1123 09:02:44.924342  318073 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/107234.pem
	I1123 09:02:44.928242  318073 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 23 08:25 /usr/share/ca-certificates/107234.pem
	I1123 09:02:44.928297  318073 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/107234.pem
	I1123 09:02:44.963747  318073 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/107234.pem /etc/ssl/certs/51391683.0"
	I1123 09:02:44.972503  318073 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1072342.pem && ln -fs /usr/share/ca-certificates/1072342.pem /etc/ssl/certs/1072342.pem"
	I1123 09:02:44.981511  318073 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1072342.pem
	I1123 09:02:44.985204  318073 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 23 08:25 /usr/share/ca-certificates/1072342.pem
	I1123 09:02:44.985256  318073 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1072342.pem
	I1123 09:02:45.023858  318073 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1072342.pem /etc/ssl/certs/3ec20f2e.0"
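The b5213941.0, 51391683.0, and 3ec20f2e.0 names above are OpenSSL subject-hash links: OpenSSL looks up CA certificates by a hash of their subject, so each PEM gets a symlink named <hash>.0. Reproducing one of the links by hand:

    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/1072342.pem)
    sudo ln -fs /etc/ssl/certs/1072342.pem "/etc/ssl/certs/${h}.0"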
	I1123 09:02:45.032583  318073 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1123 09:02:45.037014  318073 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1123 09:02:45.076487  318073 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1123 09:02:45.116172  318073 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1123 09:02:45.158565  318073 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1123 09:02:45.198031  318073 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1123 09:02:45.238046  318073 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
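Each -checkend 86400 run above asks whether a certificate survives the next 86400 seconds (24 hours); openssl exits non-zero if it would expire within that window, which is what triggers regeneration. Standalone form:

    sudo openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt \
      -checkend 86400 && echo "valid for >24h" || echo "expires within 24h"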
	I1123 09:02:45.276198  318073 kubeadm.go:401] StartCluster: {Name:pause-397202 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-397202 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 09:02:45.276321  318073 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1123 09:02:45.276405  318073 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1123 09:02:45.304993  318073 cri.go:89] found id: "ccc42c322c200e45d184ea9bd71d69ba34b954789b671365dce7545055e88536"
	I1123 09:02:45.305021  318073 cri.go:89] found id: "365b3b573a77a2ff0a22deddb7fdb06e6b2bc920107e22244e4820bc5137df66"
	I1123 09:02:45.305028  318073 cri.go:89] found id: "f3d24f3739abc889dcbb426abbf3b380336ddafb494a0b1d64a843f6189a19d0"
	I1123 09:02:45.305035  318073 cri.go:89] found id: "a028a05b2a7941979bb89b131402d5423bd73f7f4ad4b230d4a58cf622da8d85"
	I1123 09:02:45.305039  318073 cri.go:89] found id: "f5c1bc194c3b4fc7b5d8e2f47b51845d9a335c13f9879769b619d883841f25f4"
	I1123 09:02:45.305045  318073 cri.go:89] found id: "10634abd560004335d2e9611aa603556560fb6704e2dd0a376e2af47be6e9d37"
	I1123 09:02:45.305049  318073 cri.go:89] found id: "f9b138bbbfef9748bb9fc39c82d498ae87ac8d5da5ed98f16b602617b6e822b0"
	I1123 09:02:45.305054  318073 cri.go:89] found id: ""
	I1123 09:02:45.305103  318073 ssh_runner.go:195] Run: sudo runc list -f json
	W1123 09:02:45.318346  318073 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T09:02:45Z" level=error msg="open /run/runc: no such file or directory"
	I1123 09:02:45.318414  318073 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1123 09:02:45.326490  318073 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1123 09:02:45.326506  318073 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1123 09:02:45.326545  318073 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1123 09:02:45.334252  318073 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1123 09:02:45.335318  318073 kubeconfig.go:125] found "pause-397202" server: "https://192.168.94.2:8443"
	I1123 09:02:45.336868  318073 kapi.go:59] client config for pause-397202: &rest.Config{Host:"https://192.168.94.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21969-103686/.minikube/profiles/pause-397202/client.crt", KeyFile:"/home/jenkins/minikube-integration/21969-103686/.minikube/profiles/pause-397202/client.key", CAFile:"/home/jenkins/minikube-integration/21969-103686/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2814ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1123 09:02:45.337379  318073 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1123 09:02:45.337400  318073 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1123 09:02:45.337406  318073 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1123 09:02:45.337412  318073 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1123 09:02:45.337419  318073 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1123 09:02:45.337880  318073 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1123 09:02:45.347110  318073 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.94.2
	I1123 09:02:45.347142  318073 kubeadm.go:602] duration metric: took 20.629446ms to restartPrimaryControlPlane
	I1123 09:02:45.347152  318073 kubeadm.go:403] duration metric: took 70.967869ms to StartCluster
	I1123 09:02:45.347171  318073 settings.go:142] acquiring lock: {Name:mk7e59eae8b3289f60fef384e6a5716369959bb2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 09:02:45.347244  318073 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21969-103686/kubeconfig
	I1123 09:02:45.348449  318073 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21969-103686/kubeconfig: {Name:mk8cdc20da988b29689f768b3cb01ff7e637077d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 09:02:45.348676  318073 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1123 09:02:45.348736  318073 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1123 09:02:45.348869  318073 config.go:182] Loaded profile config "pause-397202": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 09:02:45.351393  318073 out.go:179] * Verifying Kubernetes components...
	I1123 09:02:45.351405  318073 out.go:179] * Enabled addons: 
	I1123 09:02:45.352530  318073 addons.go:530] duration metric: took 3.791615ms for enable addons: enabled=[]
	I1123 09:02:45.352565  318073 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 09:02:45.460657  318073 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 09:02:45.474419  318073 node_ready.go:35] waiting up to 6m0s for node "pause-397202" to be "Ready" ...
	I1123 09:02:45.482949  318073 node_ready.go:49] node "pause-397202" is "Ready"
	I1123 09:02:45.483004  318073 node_ready.go:38] duration metric: took 8.549199ms for node "pause-397202" to be "Ready" ...
	I1123 09:02:45.483023  318073 api_server.go:52] waiting for apiserver process to appear ...
	I1123 09:02:45.483077  318073 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1123 09:02:45.494949  318073 api_server.go:72] duration metric: took 146.233705ms to wait for apiserver process to appear ...
	I1123 09:02:45.495026  318073 api_server.go:88] waiting for apiserver healthz status ...
	I1123 09:02:45.495055  318073 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1123 09:02:45.499170  318073 api_server.go:279] https://192.168.94.2:8443/healthz returned 200:
	ok
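
The healthz wait above is a plain HTTPS GET against the apiserver, authenticated with the profile's client certificate and CA from the rest.Config dumped earlier. A minimal sketch of the same probe, assuming the certificate paths shown in this log (any client trusted by the cluster CA behaves identically):

    package main

    import (
    	"crypto/tls"
    	"crypto/x509"
    	"fmt"
    	"io"
    	"net/http"
    	"os"
    )

    func main() {
    	const profile = "/home/jenkins/minikube-integration/21969-103686/.minikube/profiles/pause-397202"
    	cert, err := tls.LoadX509KeyPair(profile+"/client.crt", profile+"/client.key")
    	if err != nil {
    		panic(err)
    	}
    	caPEM, err := os.ReadFile("/home/jenkins/minikube-integration/21969-103686/.minikube/ca.crt")
    	if err != nil {
    		panic(err)
    	}
    	pool := x509.NewCertPool()
    	pool.AppendCertsFromPEM(caPEM)
    	client := &http.Client{Transport: &http.Transport{
    		TLSClientConfig: &tls.Config{Certificates: []tls.Certificate{cert}, RootCAs: pool},
    	}}
    	resp, err := client.Get("https://192.168.94.2:8443/healthz")
    	if err != nil {
    		// A down apiserver fails here with "connection refused", the
    		// same error the "stopped:" healthz lines in this log report.
    		panic(err)
    	}
    	defer resp.Body.Close()
    	body, _ := io.ReadAll(resp.Body)
    	fmt.Println(resp.StatusCode, string(body)) // healthy apiserver: 200 ok
    }
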
	I1123 09:02:45.500156  318073 api_server.go:141] control plane version: v1.34.1
	I1123 09:02:45.500187  318073 api_server.go:131] duration metric: took 5.151419ms to wait for apiserver health ...
	I1123 09:02:45.500199  318073 system_pods.go:43] waiting for kube-system pods to appear ...
	I1123 09:02:45.503636  318073 system_pods.go:59] 7 kube-system pods found
	I1123 09:02:45.503664  318073 system_pods.go:61] "coredns-66bc5c9577-llbxg" [9e1f38f4-aec9-4d81-9da4-8077ab957f85] Running
	I1123 09:02:45.503672  318073 system_pods.go:61] "etcd-pause-397202" [a1b3d36c-8f20-462d-893b-f47983b73843] Running
	I1123 09:02:45.503677  318073 system_pods.go:61] "kindnet-hkxw7" [35f423d9-a900-4333-9b4c-835ffc193f45] Running
	I1123 09:02:45.503691  318073 system_pods.go:61] "kube-apiserver-pause-397202" [8c1714af-a1d9-4a7a-b7d7-8b2854751da7] Running
	I1123 09:02:45.503701  318073 system_pods.go:61] "kube-controller-manager-pause-397202" [cc3ffe30-4a58-4276-83ef-87e31c6fbcdd] Running
	I1123 09:02:45.503707  318073 system_pods.go:61] "kube-proxy-qfmgc" [887b8bcb-2b27-42a2-8854-1a7e62edef6b] Running
	I1123 09:02:45.503713  318073 system_pods.go:61] "kube-scheduler-pause-397202" [b554ebb6-788c-4a71-ba03-59696a8a1649] Running
	I1123 09:02:45.503722  318073 system_pods.go:74] duration metric: took 3.515654ms to wait for pod list to return data ...
	I1123 09:02:45.503732  318073 default_sa.go:34] waiting for default service account to be created ...
	I1123 09:02:45.505874  318073 default_sa.go:45] found service account: "default"
	I1123 09:02:45.505899  318073 default_sa.go:55] duration metric: took 2.158632ms for default service account to be created ...
	I1123 09:02:45.505909  318073 system_pods.go:116] waiting for k8s-apps to be running ...
	I1123 09:02:45.508705  318073 system_pods.go:86] 7 kube-system pods found
	I1123 09:02:45.508732  318073 system_pods.go:89] "coredns-66bc5c9577-llbxg" [9e1f38f4-aec9-4d81-9da4-8077ab957f85] Running
	I1123 09:02:45.508739  318073 system_pods.go:89] "etcd-pause-397202" [a1b3d36c-8f20-462d-893b-f47983b73843] Running
	I1123 09:02:45.508744  318073 system_pods.go:89] "kindnet-hkxw7" [35f423d9-a900-4333-9b4c-835ffc193f45] Running
	I1123 09:02:45.508749  318073 system_pods.go:89] "kube-apiserver-pause-397202" [8c1714af-a1d9-4a7a-b7d7-8b2854751da7] Running
	I1123 09:02:45.508754  318073 system_pods.go:89] "kube-controller-manager-pause-397202" [cc3ffe30-4a58-4276-83ef-87e31c6fbcdd] Running
	I1123 09:02:45.508760  318073 system_pods.go:89] "kube-proxy-qfmgc" [887b8bcb-2b27-42a2-8854-1a7e62edef6b] Running
	I1123 09:02:45.508766  318073 system_pods.go:89] "kube-scheduler-pause-397202" [b554ebb6-788c-4a71-ba03-59696a8a1649] Running
	I1123 09:02:45.508775  318073 system_pods.go:126] duration metric: took 2.858706ms to wait for k8s-apps to be running ...
	I1123 09:02:45.508788  318073 system_svc.go:44] waiting for kubelet service to be running ....
	I1123 09:02:45.508835  318073 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 09:02:45.522854  318073 system_svc.go:56] duration metric: took 14.057855ms WaitForService to wait for kubelet
	I1123 09:02:45.522882  318073 kubeadm.go:587] duration metric: took 174.170367ms to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1123 09:02:45.522910  318073 node_conditions.go:102] verifying NodePressure condition ...
	I1123 09:02:45.525149  318073 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1123 09:02:45.525171  318073 node_conditions.go:123] node cpu capacity is 8
	I1123 09:02:45.525185  318073 node_conditions.go:105] duration metric: took 2.269672ms to run NodePressure ...
	I1123 09:02:45.525197  318073 start.go:242] waiting for startup goroutines ...
	I1123 09:02:45.525203  318073 start.go:247] waiting for cluster config update ...
	I1123 09:02:45.525211  318073 start.go:256] writing updated cluster config ...
	I1123 09:02:45.525467  318073 ssh_runner.go:195] Run: rm -f paused
	I1123 09:02:45.529141  318073 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1123 09:02:45.529844  318073 kapi.go:59] client config for pause-397202: &rest.Config{Host:"https://192.168.94.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21969-103686/.minikube/profiles/pause-397202/client.crt", KeyFile:"/home/jenkins/minikube-integration/21969-103686/.minikube/profiles/pause-397202/client.key", CAFile:"/home/jenkins/minikube-integration/21969-103686/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2814ee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1123 09:02:45.532514  318073 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-llbxg" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:02:45.536617  318073 pod_ready.go:94] pod "coredns-66bc5c9577-llbxg" is "Ready"
	I1123 09:02:45.536639  318073 pod_ready.go:86] duration metric: took 4.101558ms for pod "coredns-66bc5c9577-llbxg" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:02:45.538626  318073 pod_ready.go:83] waiting for pod "etcd-pause-397202" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:02:45.542362  318073 pod_ready.go:94] pod "etcd-pause-397202" is "Ready"
	I1123 09:02:45.542387  318073 pod_ready.go:86] duration metric: took 3.742411ms for pod "etcd-pause-397202" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:02:45.544199  318073 pod_ready.go:83] waiting for pod "kube-apiserver-pause-397202" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:02:45.548749  318073 pod_ready.go:94] pod "kube-apiserver-pause-397202" is "Ready"
	I1123 09:02:45.548773  318073 pod_ready.go:86] duration metric: took 4.557382ms for pod "kube-apiserver-pause-397202" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:02:45.550710  318073 pod_ready.go:83] waiting for pod "kube-controller-manager-pause-397202" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:02:45.933462  318073 pod_ready.go:94] pod "kube-controller-manager-pause-397202" is "Ready"
	I1123 09:02:45.933491  318073 pod_ready.go:86] duration metric: took 382.762779ms for pod "kube-controller-manager-pause-397202" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:02:46.133278  318073 pod_ready.go:83] waiting for pod "kube-proxy-qfmgc" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:02:46.532928  318073 pod_ready.go:94] pod "kube-proxy-qfmgc" is "Ready"
	I1123 09:02:46.532956  318073 pod_ready.go:86] duration metric: took 399.651771ms for pod "kube-proxy-qfmgc" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:02:46.733087  318073 pod_ready.go:83] waiting for pod "kube-scheduler-pause-397202" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:02:47.132932  318073 pod_ready.go:94] pod "kube-scheduler-pause-397202" is "Ready"
	I1123 09:02:47.132958  318073 pod_ready.go:86] duration metric: took 399.847159ms for pod "kube-scheduler-pause-397202" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:02:47.132992  318073 pod_ready.go:40] duration metric: took 1.603819402s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1123 09:02:47.176313  318073 start.go:625] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1123 09:02:47.178462  318073 out.go:179] * Done! kubectl is now configured to use "pause-397202" cluster and "default" namespace by default
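
The pod_ready.go wait that closes the run above reduces to listing the kube-system pods and scanning each pod's Ready condition. A condensed client-go sketch of that check, assuming k8s.io/client-go and the kubeconfig path from this log (not the actual minikube helper):

    package main

    import (
    	"context"
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // podReady reports whether the pod's PodReady condition is True,
    // the same predicate the pod_ready.go lines above are polling.
    func podReady(p *corev1.Pod) bool {
    	for _, c := range p.Status.Conditions {
    		if c.Type == corev1.PodReady {
    			return c.Status == corev1.ConditionTrue
    		}
    	}
    	return false
    }

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/21969-103686/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	pods, err := cs.CoreV1().Pods("kube-system").List(context.Background(), metav1.ListOptions{})
    	if err != nil {
    		panic(err)
    	}
    	for _, p := range pods.Items {
    		fmt.Printf("%s ready=%v\n", p.Name, podReady(&p))
    	}
    }
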
	I1123 09:02:44.980013  284685 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1123 09:02:44.980403  284685 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1123 09:02:44.980453  284685 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1123 09:02:44.980496  284685 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1123 09:02:45.007712  284685 cri.go:89] found id: "7625a0bbc7717e3a2af3d02daa04ca8fad92012d00a0e55d65b4e67dfb7e49bd"
	I1123 09:02:45.007733  284685 cri.go:89] found id: ""
	I1123 09:02:45.007743  284685 logs.go:282] 1 containers: [7625a0bbc7717e3a2af3d02daa04ca8fad92012d00a0e55d65b4e67dfb7e49bd]
	I1123 09:02:45.007805  284685 ssh_runner.go:195] Run: which crictl
	I1123 09:02:45.011838  284685 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1123 09:02:45.011903  284685 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1123 09:02:45.039421  284685 cri.go:89] found id: ""
	I1123 09:02:45.039448  284685 logs.go:282] 0 containers: []
	W1123 09:02:45.039462  284685 logs.go:284] No container was found matching "etcd"
	I1123 09:02:45.039469  284685 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1123 09:02:45.039527  284685 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1123 09:02:45.066402  284685 cri.go:89] found id: ""
	I1123 09:02:45.066429  284685 logs.go:282] 0 containers: []
	W1123 09:02:45.066438  284685 logs.go:284] No container was found matching "coredns"
	I1123 09:02:45.066446  284685 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1123 09:02:45.066500  284685 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1123 09:02:45.093672  284685 cri.go:89] found id: "fade67abbc989ba285c7606dd3499da4894a7554fc5ea7609dbd2b5cad03e58d"
	I1123 09:02:45.093696  284685 cri.go:89] found id: ""
	I1123 09:02:45.093704  284685 logs.go:282] 1 containers: [fade67abbc989ba285c7606dd3499da4894a7554fc5ea7609dbd2b5cad03e58d]
	I1123 09:02:45.093763  284685 ssh_runner.go:195] Run: which crictl
	I1123 09:02:45.097613  284685 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1123 09:02:45.097679  284685 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1123 09:02:45.124851  284685 cri.go:89] found id: ""
	I1123 09:02:45.124873  284685 logs.go:282] 0 containers: []
	W1123 09:02:45.124885  284685 logs.go:284] No container was found matching "kube-proxy"
	I1123 09:02:45.124891  284685 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1123 09:02:45.124952  284685 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1123 09:02:45.153700  284685 cri.go:89] found id: "d38c81ac5f3288a5c1ad59bb1d159bb76a305a9504a74fa14594dca89bdf4306"
	I1123 09:02:45.153721  284685 cri.go:89] found id: ""
	I1123 09:02:45.153730  284685 logs.go:282] 1 containers: [d38c81ac5f3288a5c1ad59bb1d159bb76a305a9504a74fa14594dca89bdf4306]
	I1123 09:02:45.153779  284685 ssh_runner.go:195] Run: which crictl
	I1123 09:02:45.157778  284685 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1123 09:02:45.157843  284685 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1123 09:02:45.185702  284685 cri.go:89] found id: ""
	I1123 09:02:45.185731  284685 logs.go:282] 0 containers: []
	W1123 09:02:45.185742  284685 logs.go:284] No container was found matching "kindnet"
	I1123 09:02:45.185749  284685 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1123 09:02:45.185811  284685 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1123 09:02:45.214981  284685 cri.go:89] found id: ""
	I1123 09:02:45.215012  284685 logs.go:282] 0 containers: []
	W1123 09:02:45.215021  284685 logs.go:284] No container was found matching "storage-provisioner"
	I1123 09:02:45.215031  284685 logs.go:123] Gathering logs for kube-controller-manager [d38c81ac5f3288a5c1ad59bb1d159bb76a305a9504a74fa14594dca89bdf4306] ...
	I1123 09:02:45.215049  284685 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d38c81ac5f3288a5c1ad59bb1d159bb76a305a9504a74fa14594dca89bdf4306"
	I1123 09:02:45.242984  284685 logs.go:123] Gathering logs for CRI-O ...
	I1123 09:02:45.243015  284685 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1123 09:02:45.287828  284685 logs.go:123] Gathering logs for container status ...
	I1123 09:02:45.287859  284685 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1123 09:02:45.320426  284685 logs.go:123] Gathering logs for kubelet ...
	I1123 09:02:45.320452  284685 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1123 09:02:45.407753  284685 logs.go:123] Gathering logs for dmesg ...
	I1123 09:02:45.407794  284685 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1123 09:02:45.425107  284685 logs.go:123] Gathering logs for describe nodes ...
	I1123 09:02:45.425134  284685 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1123 09:02:45.486361  284685 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1123 09:02:45.486380  284685 logs.go:123] Gathering logs for kube-apiserver [7625a0bbc7717e3a2af3d02daa04ca8fad92012d00a0e55d65b4e67dfb7e49bd] ...
	I1123 09:02:45.486395  284685 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 7625a0bbc7717e3a2af3d02daa04ca8fad92012d00a0e55d65b4e67dfb7e49bd"
	I1123 09:02:45.521325  284685 logs.go:123] Gathering logs for kube-scheduler [fade67abbc989ba285c7606dd3499da4894a7554fc5ea7609dbd2b5cad03e58d] ...
	I1123 09:02:45.521351  284685 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 fade67abbc989ba285c7606dd3499da4894a7554fc5ea7609dbd2b5cad03e58d"
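
Each "Gathering logs for …" step above is a remote crictl invocation capped at the last 400 lines per container. A small sketch of the same collection loop over the container IDs found above, assuming crictl at /usr/local/bin/crictl as in the log (hypothetical standalone helper):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	ids := []string{
    		"7625a0bbc7717e3a2af3d02daa04ca8fad92012d00a0e55d65b4e67dfb7e49bd", // kube-apiserver
    		"fade67abbc989ba285c7606dd3499da4894a7554fc5ea7609dbd2b5cad03e58d", // kube-scheduler
    		"d38c81ac5f3288a5c1ad59bb1d159bb76a305a9504a74fa14594dca89bdf4306", // kube-controller-manager
    	}
    	for _, id := range ids {
    		// Mirrors: sudo /usr/local/bin/crictl logs --tail 400 <id>
    		out, err := exec.Command("sudo", "/usr/local/bin/crictl", "logs", "--tail", "400", id).CombinedOutput()
    		if err != nil {
    			fmt.Printf("%s: %v\n%s", id[:12], err, out)
    			continue
    		}
    		fmt.Printf("==> %s <==\n%s\n", id[:12], out)
    	}
    }
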
	I1123 09:02:48.073993  284685 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1123 09:02:50.955939  317334 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl stop --timeout=10 8653fd007ce583a2d825eb177fdef0cce573312f336809a2c9ce21ec4787bdf8 e03d60227209ee0a10353ceee3143cd3a825f70fbe920c9b6a144db4991ee676 f1b9fa1dd04a10f21f27a858f15713e3827efdf9ddb6e87ae16648c562ab8894 4bf24e6b47b0cec1bfec2975525a89c97ba7b454c63c75a198832221c2ee9e14: (10.761650122s)
	I1123 09:02:50.956022  317334 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 09:02:50.971523  317334 out.go:179]   - Kubernetes: Stopped
	
	
	==> CRI-O <==
	Nov 23 09:02:44 pause-397202 crio[2132]: time="2025-11-23T09:02:44.1286646Z" level=info msg="RDT not available in the host system"
	Nov 23 09:02:44 pause-397202 crio[2132]: time="2025-11-23T09:02:44.128674874Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Nov 23 09:02:44 pause-397202 crio[2132]: time="2025-11-23T09:02:44.129458981Z" level=info msg="Conmon does support the --sync option"
	Nov 23 09:02:44 pause-397202 crio[2132]: time="2025-11-23T09:02:44.129475963Z" level=info msg="Conmon does support the --log-global-size-max option"
	Nov 23 09:02:44 pause-397202 crio[2132]: time="2025-11-23T09:02:44.129488262Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Nov 23 09:02:44 pause-397202 crio[2132]: time="2025-11-23T09:02:44.130199626Z" level=info msg="Conmon does support the --sync option"
	Nov 23 09:02:44 pause-397202 crio[2132]: time="2025-11-23T09:02:44.130214292Z" level=info msg="Conmon does support the --log-global-size-max option"
	Nov 23 09:02:44 pause-397202 crio[2132]: time="2025-11-23T09:02:44.134105481Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 23 09:02:44 pause-397202 crio[2132]: time="2025-11-23T09:02:44.134126005Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 23 09:02:44 pause-397202 crio[2132]: time="2025-11-23T09:02:44.134596968Z" level=info msg="Current CRI-O configuration:\n[crio]\n  root = \"/var/lib/containers/storage\"\n  runroot = \"/run/containers/storage\"\n  imagestore = \"\"\n  storage_driver = \"overlay\"\n  log_dir = \"/var/log/crio/pods\"\n  version_file = \"/var/run/crio/version\"\n  version_file_persist = \"\"\n  clean_shutdown_file = \"/var/lib/crio/clean.shutdown\"\n  internal_wipe = true\n  internal_repair = true\n  [crio.api]\n    grpc_max_send_msg_size = 83886080\n    grpc_max_recv_msg_size = 83886080\n    listen = \"/var/run/crio/crio.sock\"\n    stream_address = \"127.0.0.1\"\n    stream_port = \"0\"\n    stream_enable_tls = false\n    stream_tls_cert = \"\"\n    stream_tls_key = \"\"\n    stream_tls_ca = \"\"\n    stream_idle_timeout = \"\"\n  [crio.runtime]\n    no_pivot = false\n    selinux = false\n    log_to_journald = false\n    drop_infra_ctr = true\n    read_only = false\n    hooks_dir = [\"/usr/share/containers/oci/hooks.d\"]\n    default_capabilities = [\"CHOWN\", \"DAC_OVERRIDE\", \"FSETID\", \"FOWNER\", \"SETGID\", \"SETUID\", \"SETPCAP\", \"NET_BIND_SERVICE\", \"KILL\"]\n    add_inheritable_capabilities = false\n    default_sysctls = [\"net.ipv4.ip_unprivileged_port_start=0\"]\n    allowed_devices = [\"/dev/fuse\", \"/dev/net/tun\"]\n    cdi_spec_dirs = [\"/etc/cdi\", \"/var/run/cdi\"]\n    device_ownership_from_security_context = false\n    default_runtime = \"crun\"\n    decryption_keys_path = \"/etc/crio/keys/\"\n    conmon = \"\"\n    conmon_cgroup = \"pod\"\n    seccomp_profile = \"\"\n    privileged_seccomp_profile = \"\"\n    apparmor_profile = \"crio-default\"\n    blockio_config_file = \"\"\n    blockio_reload = false\n    irqbalance_config_file = \"/etc/sysconfig/irqbalance\"\n    rdt_config_file = \"\"\n    cgroup_manager = \"systemd\"\n    default_mounts_file = \"\"\n    container_exits_dir = \"/var/run/crio/exits\"\n    container_attach_socket_dir = \"/var/run/crio\"\n    bind_mount_prefix = \"\"\n    uid_mappings = \"\"\n    minimum_mappable_uid = -1\n    gid_mappings = \"\"\n    minimum_mappable_gid = -1\n    log_level = \"info\"\n    log_filter = \"\"\n    namespaces_dir = \"/var/run\"\n    pinns_path = \"/usr/bin/pinns\"\n    enable_criu_support = false\n    pids_limit = -1\n    log_size_max = -1\n    ctr_stop_timeout = 30\n    separate_pull_cgroup = \"\"\n    infra_ctr_cpuset = \"\"\n    shared_cpuset = \"\"\n    enable_pod_events = false\n    irqbalance_config_restore_file = \"/etc/sysconfig/orig_irq_banned_cpus\"\n    hostnetwork_disable_selinux = true\n    disable_hostport_mapping = false\n    timezone = \"\"\n    [crio.runtime.runtimes]\n      [crio.runtime.runtimes.crun]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/crun\"\n        runtime_type = \"\"\n        runtime_root = \"/run/crun\"\n        allowed_annotations = [\"io.containers.trace-syscall\"]\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory = \"12MiB\"\n        no_sync_log = false\n      [crio.runtime.runtimes.runc]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/runc\"\n        runtime_type = \"\"\n        runtime_root = \"/run/runc\"\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory = \"12MiB\"\n        no_sync_log = false\n  [crio.image]\n    default_transport = \"docker://\"\n    global_auth_file = \"\"\n    pause_image = \"registry.k8s.io/pause:3.10.1\"\n    pause_image_auth_file = \"\"\n    pause_command = \"/pause\"\n    signature_policy = \"/etc/crio/policy.json\"\n    signature_policy_dir = \"/etc/crio/policies\"\n    image_volumes = \"mkdir\"\n    big_files_temporary_dir = \"\"\n    auto_reload_registries = false\n    pull_progress_timeout = \"0s\"\n    oci_artifact_mount_support = true\n    short_name_mode = \"enforcing\"\n  [crio.network]\n    cni_default_network = \"\"\n    network_dir = \"/etc/cni/net.d/\"\n    plugin_dirs = [\"/opt/cni/bin/\"]\n  [crio.metrics]\n    enable_metrics = false\n    metrics_collectors = [\"image_pulls_layer_size\", \"containers_events_dropped_total\", \"containers_oom_total\", \"processes_defunct\", \"operations_total\", \"operations_latency_seconds\", \"operations_latency_seconds_total\", \"operations_errors_total\", \"image_pulls_bytes_total\", \"image_pulls_skipped_bytes_total\", \"image_pulls_failure_total\", \"image_pulls_success_total\", \"image_layer_reuse_total\", \"containers_oom_count_total\", \"containers_seccomp_notifier_count_total\", \"resources_stalled_at_stage\", \"containers_stopped_monitor_count\"]\n    metrics_host = \"127.0.0.1\"\n    metrics_port = 9090\n    metrics_socket = \"\"\n    metrics_cert = \"\"\n    metrics_key = \"\"\n  [crio.tracing]\n    enable_tracing = false\n    tracing_endpoint = \"127.0.0.1:4317\"\n    tracing_sampling_rate_per_million = 0\n  [crio.stats]\n    stats_collection_period = 0\n    collection_period = 0\n  [crio.nri]\n    enable_nri = true\n    nri_listen = \"/var/run/nri/nri.sock\"\n    nri_plugin_dir = \"/opt/nri/plugins\"\n    nri_plugin_config_dir = \"/etc/nri/conf.d\"\n    nri_plugin_registration_timeout = \"5s\"\n    nri_plugin_request_timeout = \"2s\"\n    nri_disable_connections = false\n    [crio.nri.default_validator]\n      nri_enable_default_validator = false\n      nri_validator_reject_oci_hook_adjustment = false\n      nri_validator_reject_runtime_default_seccomp_adjustment = false\n      nri_validator_reject_unconfined_seccomp_adjustment = false\n      nri_validator_reject_custom_seccomp_adjustment = false\n      nri_validator_reject_namespace_adjustment = false\n      nri_validator_tolerate_missing_plugins_annotation = \"\"\n"
	Nov 23 09:02:44 pause-397202 crio[2132]: time="2025-11-23T09:02:44.135024236Z" level=info msg="Attempting to restore irqbalance config from /etc/sysconfig/orig_irq_banned_cpus"
	Nov 23 09:02:44 pause-397202 crio[2132]: time="2025-11-23T09:02:44.135079551Z" level=info msg="Restore irqbalance config: failed to get current CPU ban list, ignoring"
	Nov 23 09:02:44 pause-397202 crio[2132]: time="2025-11-23T09:02:44.226827558Z" level=info msg="Got pod network &{Name:coredns-66bc5c9577-llbxg Namespace:kube-system ID:90a598823f988fad5a7e76487f2384502f4d083c42274b42452ef472849ffc26 UID:9e1f38f4-aec9-4d81-9da4-8077ab957f85 NetNS:/var/run/netns/c92d3da4-25b7-4280-a036-2b31bd0a0a2e Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000292428}] Aliases:map[]}"
	Nov 23 09:02:44 pause-397202 crio[2132]: time="2025-11-23T09:02:44.22707027Z" level=info msg="Checking pod kube-system_coredns-66bc5c9577-llbxg for CNI network kindnet (type=ptp)"
	Nov 23 09:02:44 pause-397202 crio[2132]: time="2025-11-23T09:02:44.227514819Z" level=info msg="Registered SIGHUP reload watcher"
	Nov 23 09:02:44 pause-397202 crio[2132]: time="2025-11-23T09:02:44.227538125Z" level=info msg="Starting seccomp notifier watcher"
	Nov 23 09:02:44 pause-397202 crio[2132]: time="2025-11-23T09:02:44.227586414Z" level=info msg="Create NRI interface"
	Nov 23 09:02:44 pause-397202 crio[2132]: time="2025-11-23T09:02:44.227688972Z" level=info msg="built-in NRI default validator is disabled"
	Nov 23 09:02:44 pause-397202 crio[2132]: time="2025-11-23T09:02:44.227701141Z" level=info msg="runtime interface created"
	Nov 23 09:02:44 pause-397202 crio[2132]: time="2025-11-23T09:02:44.227710851Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Nov 23 09:02:44 pause-397202 crio[2132]: time="2025-11-23T09:02:44.227715599Z" level=info msg="runtime interface starting up..."
	Nov 23 09:02:44 pause-397202 crio[2132]: time="2025-11-23T09:02:44.227720732Z" level=info msg="starting plugins..."
	Nov 23 09:02:44 pause-397202 crio[2132]: time="2025-11-23T09:02:44.227731798Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Nov 23 09:02:44 pause-397202 crio[2132]: time="2025-11-23T09:02:44.228071719Z" level=info msg="No systemd watchdog enabled"
	Nov 23 09:02:44 pause-397202 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                    NAMESPACE
	ccc42c322c200       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   14 seconds ago      Running             coredns                   0                   90a598823f988       coredns-66bc5c9577-llbxg               kube-system
	365b3b573a77a       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7   25 seconds ago      Running             kube-proxy                0                   1b773a159057c       kube-proxy-qfmgc                       kube-system
	f3d24f3739abc       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   25 seconds ago      Running             kindnet-cni               0                   d467a56173100       kindnet-hkxw7                          kube-system
	a028a05b2a794       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813   35 seconds ago      Running             kube-scheduler            0                   9c3231f729b3f       kube-scheduler-pause-397202            kube-system
	f5c1bc194c3b4       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97   35 seconds ago      Running             kube-apiserver            0                   f90841af713aa       kube-apiserver-pause-397202            kube-system
	10634abd56000       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f   35 seconds ago      Running             kube-controller-manager   0                   f9a46522a2b0d       kube-controller-manager-pause-397202   kube-system
	f9b138bbbfef9       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   35 seconds ago      Running             etcd                      0                   758e4507dfb10       etcd-pause-397202                      kube-system
	
	
	==> coredns [ccc42c322c200e45d184ea9bd71d69ba34b954789b671365dce7545055e88536] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = c7556d8fdf49c5e32a9077be8cfb9fc6947bb07e663a10d55b192eb63ad1f2bd9793e8e5f5a36fc9abb1957831eec5c997fd9821790e3990ae9531bf41ecea37
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:51247 - 18016 "HINFO IN 8093919022219011200.8258790786039506546. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.02611965s
	
	
	==> describe nodes <==
	Name:               pause-397202
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-397202
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=50c3a8a3c03e8a84b6c978a884d21c3de8c6d4f1
	                    minikube.k8s.io/name=pause-397202
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_23T09_02_23_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 23 Nov 2025 09:02:18 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-397202
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 23 Nov 2025 09:02:41 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 23 Nov 2025 09:02:37 +0000   Sun, 23 Nov 2025 09:02:16 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 23 Nov 2025 09:02:37 +0000   Sun, 23 Nov 2025 09:02:16 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 23 Nov 2025 09:02:37 +0000   Sun, 23 Nov 2025 09:02:16 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 23 Nov 2025 09:02:37 +0000   Sun, 23 Nov 2025 09:02:37 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    pause-397202
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 9629f1d5bc1ed524a56ce23c69214c09
	  System UUID:                7553187a-fd51-4de3-8874-b0ec6f7b6f6b
	  Boot ID:                    49386d37-de97-442f-8364-9b77578bcf47
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-llbxg                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     26s
	  kube-system                 etcd-pause-397202                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         33s
	  kube-system                 kindnet-hkxw7                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      26s
	  kube-system                 kube-apiserver-pause-397202             250m (3%)     0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 kube-controller-manager-pause-397202    200m (2%)     0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 kube-proxy-qfmgc                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         26s
	  kube-system                 kube-scheduler-pause-397202             100m (1%)     0 (0%)      0 (0%)           0 (0%)         31s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 25s   kube-proxy       
	  Normal  Starting                 31s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  31s   kubelet          Node pause-397202 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    31s   kubelet          Node pause-397202 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     31s   kubelet          Node pause-397202 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           27s   node-controller  Node pause-397202 event: Registered Node pause-397202 in Controller
	  Normal  NodeReady                15s   kubelet          Node pause-397202 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 06 82 4b 59 78 74 08 06
	[Nov23 08:13] IPv4: martian source 10.244.0.1 from 10.244.0.51, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 1e 73 2a 74 8f 84 08 06
	[Nov23 08:22] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 52 89 3b 40 d7 04 72 61 ff 75 a6 33 08 00
	[  +1.017594] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000026] ll header: 00000000: 52 89 3b 40 d7 04 72 61 ff 75 a6 33 08 00
	[  +1.023854] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000020] ll header: 00000000: 52 89 3b 40 d7 04 72 61 ff 75 a6 33 08 00
	[  +1.023902] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 52 89 3b 40 d7 04 72 61 ff 75 a6 33 08 00
	[  +1.024926] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 52 89 3b 40 d7 04 72 61 ff 75 a6 33 08 00
	[  +1.022928] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000027] ll header: 00000000: 52 89 3b 40 d7 04 72 61 ff 75 a6 33 08 00
	[  +2.047819] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 52 89 3b 40 d7 04 72 61 ff 75 a6 33 08 00
	[  +4.031665] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 52 89 3b 40 d7 04 72 61 ff 75 a6 33 08 00
	[  +8.255342] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 52 89 3b 40 d7 04 72 61 ff 75 a6 33 08 00
	[Nov23 08:23] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 52 89 3b 40 d7 04 72 61 ff 75 a6 33 08 00
	[ +32.253523] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000023] ll header: 00000000: 52 89 3b 40 d7 04 72 61 ff 75 a6 33 08 00
	
	
	==> etcd [f9b138bbbfef9748bb9fc39c82d498ae87ac8d5da5ed98f16b602617b6e822b0] <==
	{"level":"info","ts":"2025-11-23T09:02:22.676683Z","caller":"traceutil/trace.go:172","msg":"trace[288519952] range","detail":"{range_begin:/registry/clusterroles/kindnet; range_end:; response_count:0; response_revision:267; }","duration":"127.464378ms","start":"2025-11-23T09:02:22.549160Z","end":"2025-11-23T09:02:22.676624Z","steps":["trace[288519952] 'agreement among raft nodes before linearized reading'  (duration: 126.666412ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-23T09:02:22.676880Z","caller":"traceutil/trace.go:172","msg":"trace[707599391] transaction","detail":"{read_only:false; response_revision:268; number_of_response:1; }","duration":"137.579912ms","start":"2025-11-23T09:02:22.539287Z","end":"2025-11-23T09:02:22.676867Z","steps":["trace[707599391] 'process raft request'  (duration: 136.598876ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-23T09:02:22.676754Z","caller":"traceutil/trace.go:172","msg":"trace[1765641848] transaction","detail":"{read_only:false; response_revision:269; number_of_response:1; }","duration":"136.060377ms","start":"2025-11-23T09:02:22.540683Z","end":"2025-11-23T09:02:22.676743Z","steps":["trace[1765641848] 'process raft request'  (duration: 135.989812ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-23T09:02:22.818065Z","caller":"traceutil/trace.go:172","msg":"trace[1370234003] linearizableReadLoop","detail":"{readStateIndex:278; appliedIndex:278; }","duration":"135.524728ms","start":"2025-11-23T09:02:22.682516Z","end":"2025-11-23T09:02:22.818041Z","steps":["trace[1370234003] 'read index received'  (duration: 135.515341ms)","trace[1370234003] 'applied index is now lower than readState.Index'  (duration: 8.155µs)"],"step_count":2}
	{"level":"warn","ts":"2025-11-23T09:02:23.028946Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"346.384271ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/resourcequota-controller\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-23T09:02:23.029039Z","caller":"traceutil/trace.go:172","msg":"trace[1892397226] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/resourcequota-controller; range_end:; response_count:0; response_revision:269; }","duration":"346.509757ms","start":"2025-11-23T09:02:22.682511Z","end":"2025-11-23T09:02:23.029021Z","steps":["trace[1892397226] 'agreement among raft nodes before linearized reading'  (duration: 135.616476ms)","trace[1892397226] 'range keys from in-memory index tree'  (duration: 210.735397ms)"],"step_count":2}
	{"level":"warn","ts":"2025-11-23T09:02:23.029078Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-11-23T09:02:22.682497Z","time spent":"346.565843ms","remote":"127.0.0.1:44280","response type":"/etcdserverpb.KV/Range","request count":0,"request size":66,"response count":0,"response size":29,"request content":"key:\"/registry/serviceaccounts/kube-system/resourcequota-controller\" limit:1 "}
	{"level":"warn","ts":"2025-11-23T09:02:23.030224Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"211.627867ms","expected-duration":"100ms","prefix":"","request":"header:<ID:6571766361752330637 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/clusterroles/kindnet\" mod_revision:0 > success:<request_put:<key:\"/registry/clusterroles/kindnet\" value_size:1042 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2025-11-23T09:02:23.030389Z","caller":"traceutil/trace.go:172","msg":"trace[1303714613] transaction","detail":"{read_only:false; response_revision:271; number_of_response:1; }","duration":"343.11176ms","start":"2025-11-23T09:02:22.687265Z","end":"2025-11-23T09:02:23.030377Z","steps":["trace[1303714613] 'process raft request'  (duration: 343.033489ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-23T09:02:23.030419Z","caller":"traceutil/trace.go:172","msg":"trace[980065289] transaction","detail":"{read_only:false; response_revision:270; number_of_response:1; }","duration":"349.593419ms","start":"2025-11-23T09:02:22.680804Z","end":"2025-11-23T09:02:23.030397Z","steps":["trace[980065289] 'process raft request'  (duration: 137.352058ms)","trace[980065289] 'compare'  (duration: 211.063448ms)"],"step_count":2}
	{"level":"warn","ts":"2025-11-23T09:02:23.030593Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-11-23T09:02:22.687247Z","time spent":"343.17215ms","remote":"127.0.0.1:44232","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":7268,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/pods/kube-system/kube-controller-manager-pause-397202\" mod_revision:243 > success:<request_put:<key:\"/registry/pods/kube-system/kube-controller-manager-pause-397202\" value_size:7197 >> failure:<request_range:<key:\"/registry/pods/kube-system/kube-controller-manager-pause-397202\" > >"}
	{"level":"warn","ts":"2025-11-23T09:02:23.030696Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-11-23T09:02:22.680788Z","time spent":"349.871873ms","remote":"127.0.0.1:44598","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1080,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/clusterroles/kindnet\" mod_revision:0 > success:<request_put:<key:\"/registry/clusterroles/kindnet\" value_size:1042 >> failure:<>"}
	{"level":"info","ts":"2025-11-23T09:02:23.300403Z","caller":"traceutil/trace.go:172","msg":"trace[1497327376] linearizableReadLoop","detail":"{readStateIndex:283; appliedIndex:283; }","duration":"178.318571ms","start":"2025-11-23T09:02:23.122046Z","end":"2025-11-23T09:02:23.300365Z","steps":["trace[1497327376] 'read index received'  (duration: 178.307571ms)","trace[1497327376] 'applied index is now lower than readState.Index'  (duration: 9.033µs)"],"step_count":2}
	{"level":"warn","ts":"2025-11-23T09:02:23.404890Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"282.818071ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/horizontal-pod-autoscaler\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"warn","ts":"2025-11-23T09:02:23.404951Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"104.394209ms","expected-duration":"100ms","prefix":"","request":"header:<ID:6571766361752330652 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/serviceaccounts/kube-system/kindnet\" mod_revision:0 > success:<request_put:<key:\"/registry/serviceaccounts/kube-system/kindnet\" value_size:452 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2025-11-23T09:02:23.405031Z","caller":"traceutil/trace.go:172","msg":"trace[1914715188] transaction","detail":"{read_only:false; response_revision:275; number_of_response:1; }","duration":"290.607273ms","start":"2025-11-23T09:02:23.114409Z","end":"2025-11-23T09:02:23.405016Z","steps":["trace[1914715188] 'process raft request'  (duration: 186.083331ms)","trace[1914715188] 'compare'  (duration: 104.171404ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-23T09:02:23.405383Z","caller":"traceutil/trace.go:172","msg":"trace[2070636495] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/horizontal-pod-autoscaler; range_end:; response_count:0; response_revision:274; }","duration":"282.906314ms","start":"2025-11-23T09:02:23.122041Z","end":"2025-11-23T09:02:23.404947Z","steps":["trace[2070636495] 'agreement among raft nodes before linearized reading'  (duration: 178.41807ms)","trace[2070636495] 'range keys from in-memory index tree'  (duration: 104.372222ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-23T09:02:23.406117Z","caller":"traceutil/trace.go:172","msg":"trace[442467875] transaction","detail":"{read_only:false; response_revision:276; number_of_response:1; }","duration":"290.322103ms","start":"2025-11-23T09:02:23.115783Z","end":"2025-11-23T09:02:23.406105Z","steps":["trace[442467875] 'process raft request'  (duration: 290.225421ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-23T09:02:23.663606Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"160.303032ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/disruption-controller\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-23T09:02:23.663684Z","caller":"traceutil/trace.go:172","msg":"trace[1267289251] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/disruption-controller; range_end:; response_count:0; response_revision:278; }","duration":"160.393459ms","start":"2025-11-23T09:02:23.503270Z","end":"2025-11-23T09:02:23.663664Z","steps":["trace[1267289251] 'agreement among raft nodes before linearized reading'  (duration: 20.663787ms)","trace[1267289251] 'range keys from in-memory index tree'  (duration: 139.604648ms)"],"step_count":2}
	{"level":"warn","ts":"2025-11-23T09:02:23.663743Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"139.689529ms","expected-duration":"100ms","prefix":"","request":"header:<ID:6571766361752330664 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/pods/kube-system/kube-controller-manager-pause-397202\" mod_revision:271 > success:<request_put:<key:\"/registry/pods/kube-system/kube-controller-manager-pause-397202\" value_size:7407 >> failure:<request_range:<key:\"/registry/pods/kube-system/kube-controller-manager-pause-397202\" > >>","response":"size:16"}
	{"level":"info","ts":"2025-11-23T09:02:23.663907Z","caller":"traceutil/trace.go:172","msg":"trace[1185611390] transaction","detail":"{read_only:false; response_revision:279; number_of_response:1; }","duration":"162.097745ms","start":"2025-11-23T09:02:23.501785Z","end":"2025-11-23T09:02:23.663883Z","steps":["trace[1185611390] 'process raft request'  (duration: 22.203301ms)","trace[1185611390] 'compare'  (duration: 139.582846ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-23T09:02:23.786013Z","caller":"traceutil/trace.go:172","msg":"trace[61419163] transaction","detail":"{read_only:false; response_revision:281; number_of_response:1; }","duration":"115.740236ms","start":"2025-11-23T09:02:23.670252Z","end":"2025-11-23T09:02:23.785992Z","steps":["trace[61419163] 'process raft request'  (duration: 113.262148ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-23T09:02:23.786140Z","caller":"traceutil/trace.go:172","msg":"trace[1259845641] transaction","detail":"{read_only:false; response_revision:282; number_of_response:1; }","duration":"113.500679ms","start":"2025-11-23T09:02:23.672623Z","end":"2025-11-23T09:02:23.786124Z","steps":["trace[1259845641] 'process raft request'  (duration: 113.218051ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-23T09:02:23.786153Z","caller":"traceutil/trace.go:172","msg":"trace[1029052553] transaction","detail":"{read_only:false; response_revision:283; number_of_response:1; }","duration":"112.387813ms","start":"2025-11-23T09:02:23.673754Z","end":"2025-11-23T09:02:23.786141Z","steps":["trace[1029052553] 'process raft request'  (duration: 112.279067ms)"],"step_count":1}
	
	
	==> kernel <==
	 09:02:52 up  1:45,  0 user,  load average: 6.33, 3.37, 1.98
	Linux pause-397202 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [f3d24f3739abc889dcbb426abbf3b380336ddafb494a0b1d64a843f6189a19d0] <==
	I1123 09:02:27.192083       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1123 09:02:27.192327       1 main.go:139] hostIP = 192.168.94.2
	podIP = 192.168.94.2
	I1123 09:02:27.192479       1 main.go:148] setting mtu 1500 for CNI 
	I1123 09:02:27.192501       1 main.go:178] kindnetd IP family: "ipv4"
	I1123 09:02:27.192525       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-23T09:02:27Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1123 09:02:27.394821       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1123 09:02:27.566062       1 controller.go:381] "Waiting for informer caches to sync"
	I1123 09:02:27.566121       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1123 09:02:27.587884       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1123 09:02:27.887718       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1123 09:02:27.887766       1 metrics.go:72] Registering metrics
	I1123 09:02:27.888208       1 controller.go:711] "Syncing nftables rules"
	I1123 09:02:37.396079       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1123 09:02:37.396143       1 main.go:301] handling current node
	I1123 09:02:47.398492       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1123 09:02:47.398523       1 main.go:301] handling current node
	
	
	==> kube-apiserver [f5c1bc194c3b4fc7b5d8e2f47b51845d9a335c13f9879769b619d883841f25f4] <==
	E1123 09:02:18.663590       1 controller.go:148] "Unhandled Error" err="while syncing ConfigMap \"kube-system/kube-apiserver-legacy-service-account-token-tracking\", err: namespaces \"kube-system\" not found" logger="UnhandledError"
	E1123 09:02:18.696360       1 controller.go:145] "Failed to ensure lease exists, will retry" err="namespaces \"kube-system\" not found" interval="200ms"
	I1123 09:02:18.711116       1 controller.go:667] quota admission added evaluator for: namespaces
	I1123 09:02:18.718130       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1123 09:02:18.718169       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1123 09:02:18.729739       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1123 09:02:18.731873       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1123 09:02:18.899884       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1123 09:02:19.512952       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1123 09:02:19.517818       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1123 09:02:19.517839       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1123 09:02:20.098183       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1123 09:02:20.150022       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1123 09:02:20.219205       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1123 09:02:20.226732       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.94.2]
	I1123 09:02:20.228479       1 controller.go:667] quota admission added evaluator for: endpoints
	I1123 09:02:20.233475       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1123 09:02:20.542291       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1123 09:02:21.365081       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1123 09:02:21.386043       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1123 09:02:21.399730       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1123 09:02:26.142944       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1123 09:02:26.147061       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1123 09:02:26.191610       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1123 09:02:26.591155       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [10634abd560004335d2e9611aa603556560fb6704e2dd0a376e2af47be6e9d37] <==
	I1123 09:02:25.536649       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1123 09:02:25.536766       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1123 09:02:25.536816       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1123 09:02:25.536833       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1123 09:02:25.536843       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1123 09:02:25.537236       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1123 09:02:25.539440       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1123 09:02:25.539469       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1123 09:02:25.539931       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1123 09:02:25.541767       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1123 09:02:25.541892       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1123 09:02:25.541949       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1123 09:02:25.541962       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1123 09:02:25.541980       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1123 09:02:25.542852       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1123 09:02:25.544279       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1123 09:02:25.547449       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1123 09:02:25.547765       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="pause-397202" podCIDRs=["10.244.0.0/24"]
	I1123 09:02:25.548732       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1123 09:02:25.549504       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1123 09:02:25.554014       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1123 09:02:25.562240       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1123 09:02:25.564416       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1123 09:02:25.570895       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1123 09:02:40.479416       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [365b3b573a77a2ff0a22deddb7fdb06e6b2bc920107e22244e4820bc5137df66] <==
	I1123 09:02:27.017156       1 server_linux.go:53] "Using iptables proxy"
	I1123 09:02:27.084809       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1123 09:02:27.185221       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1123 09:02:27.185256       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.94.2"]
	E1123 09:02:27.185359       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1123 09:02:27.205003       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1123 09:02:27.205051       1 server_linux.go:132] "Using iptables Proxier"
	I1123 09:02:27.210226       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1123 09:02:27.210613       1 server.go:527] "Version info" version="v1.34.1"
	I1123 09:02:27.210632       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1123 09:02:27.212197       1 config.go:403] "Starting serviceCIDR config controller"
	I1123 09:02:27.212221       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1123 09:02:27.212254       1 config.go:200] "Starting service config controller"
	I1123 09:02:27.212260       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1123 09:02:27.212288       1 config.go:106] "Starting endpoint slice config controller"
	I1123 09:02:27.212295       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1123 09:02:27.212411       1 config.go:309] "Starting node config controller"
	I1123 09:02:27.212418       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1123 09:02:27.212425       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1123 09:02:27.313391       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1123 09:02:27.313383       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1123 09:02:27.313398       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [a028a05b2a7941979bb89b131402d5423bd73f7f4ad4b230d4a58cf622da8d85] <==
	E1123 09:02:18.580837       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1123 09:02:18.581135       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1123 09:02:18.581316       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1123 09:02:18.581327       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1123 09:02:18.581394       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1123 09:02:18.581518       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1123 09:02:18.581821       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1123 09:02:18.582114       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1123 09:02:18.582255       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1123 09:02:18.582323       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1123 09:02:18.582464       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1123 09:02:18.582610       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1123 09:02:19.516287       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1123 09:02:19.549553       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1123 09:02:19.566113       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1123 09:02:19.665810       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1123 09:02:19.677039       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1123 09:02:19.697649       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1123 09:02:19.710821       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1123 09:02:19.753116       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1123 09:02:19.771817       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1123 09:02:19.856284       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1123 09:02:19.862553       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1123 09:02:19.878773       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	I1123 09:02:21.668698       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 23 09:02:22 pause-397202 kubelet[1314]: I1123 09:02:22.678505    1314 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-pause-397202" podStartSLOduration=1.678479531 podStartE2EDuration="1.678479531s" podCreationTimestamp="2025-11-23 09:02:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 09:02:22.53222513 +0000 UTC m=+1.422905565" watchObservedRunningTime="2025-11-23 09:02:22.678479531 +0000 UTC m=+1.569159950"
	Nov 23 09:02:23 pause-397202 kubelet[1314]: I1123 09:02:23.032035    1314 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-pause-397202" podStartSLOduration=2.03201147 podStartE2EDuration="2.03201147s" podCreationTimestamp="2025-11-23 09:02:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 09:02:22.678723341 +0000 UTC m=+1.569403768" watchObservedRunningTime="2025-11-23 09:02:23.03201147 +0000 UTC m=+1.922691894"
	Nov 23 09:02:23 pause-397202 kubelet[1314]: I1123 09:02:23.106327    1314 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-pause-397202" podStartSLOduration=2.106302468 podStartE2EDuration="2.106302468s" podCreationTimestamp="2025-11-23 09:02:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 09:02:23.032351813 +0000 UTC m=+1.923032215" watchObservedRunningTime="2025-11-23 09:02:23.106302468 +0000 UTC m=+1.996982887"
	Nov 23 09:02:23 pause-397202 kubelet[1314]: I1123 09:02:23.409341    1314 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-pause-397202" podStartSLOduration=4.409303589 podStartE2EDuration="4.409303589s" podCreationTimestamp="2025-11-23 09:02:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 09:02:23.10669412 +0000 UTC m=+1.997374526" watchObservedRunningTime="2025-11-23 09:02:23.409303589 +0000 UTC m=+2.299983994"
	Nov 23 09:02:25 pause-397202 kubelet[1314]: I1123 09:02:25.571564    1314 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Nov 23 09:02:25 pause-397202 kubelet[1314]: I1123 09:02:25.572344    1314 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 23 09:02:26 pause-397202 kubelet[1314]: I1123 09:02:26.630361    1314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/35f423d9-a900-4333-9b4c-835ffc193f45-xtables-lock\") pod \"kindnet-hkxw7\" (UID: \"35f423d9-a900-4333-9b4c-835ffc193f45\") " pod="kube-system/kindnet-hkxw7"
	Nov 23 09:02:26 pause-397202 kubelet[1314]: I1123 09:02:26.630399    1314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/35f423d9-a900-4333-9b4c-835ffc193f45-lib-modules\") pod \"kindnet-hkxw7\" (UID: \"35f423d9-a900-4333-9b4c-835ffc193f45\") " pod="kube-system/kindnet-hkxw7"
	Nov 23 09:02:26 pause-397202 kubelet[1314]: I1123 09:02:26.630423    1314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/887b8bcb-2b27-42a2-8854-1a7e62edef6b-kube-proxy\") pod \"kube-proxy-qfmgc\" (UID: \"887b8bcb-2b27-42a2-8854-1a7e62edef6b\") " pod="kube-system/kube-proxy-qfmgc"
	Nov 23 09:02:26 pause-397202 kubelet[1314]: I1123 09:02:26.630457    1314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/887b8bcb-2b27-42a2-8854-1a7e62edef6b-xtables-lock\") pod \"kube-proxy-qfmgc\" (UID: \"887b8bcb-2b27-42a2-8854-1a7e62edef6b\") " pod="kube-system/kube-proxy-qfmgc"
	Nov 23 09:02:26 pause-397202 kubelet[1314]: I1123 09:02:26.630520    1314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/35f423d9-a900-4333-9b4c-835ffc193f45-cni-cfg\") pod \"kindnet-hkxw7\" (UID: \"35f423d9-a900-4333-9b4c-835ffc193f45\") " pod="kube-system/kindnet-hkxw7"
	Nov 23 09:02:26 pause-397202 kubelet[1314]: I1123 09:02:26.630546    1314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-94n5v\" (UniqueName: \"kubernetes.io/projected/35f423d9-a900-4333-9b4c-835ffc193f45-kube-api-access-94n5v\") pod \"kindnet-hkxw7\" (UID: \"35f423d9-a900-4333-9b4c-835ffc193f45\") " pod="kube-system/kindnet-hkxw7"
	Nov 23 09:02:26 pause-397202 kubelet[1314]: I1123 09:02:26.630597    1314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w59rx\" (UniqueName: \"kubernetes.io/projected/887b8bcb-2b27-42a2-8854-1a7e62edef6b-kube-api-access-w59rx\") pod \"kube-proxy-qfmgc\" (UID: \"887b8bcb-2b27-42a2-8854-1a7e62edef6b\") " pod="kube-system/kube-proxy-qfmgc"
	Nov 23 09:02:26 pause-397202 kubelet[1314]: I1123 09:02:26.630712    1314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/887b8bcb-2b27-42a2-8854-1a7e62edef6b-lib-modules\") pod \"kube-proxy-qfmgc\" (UID: \"887b8bcb-2b27-42a2-8854-1a7e62edef6b\") " pod="kube-system/kube-proxy-qfmgc"
	Nov 23 09:02:27 pause-397202 kubelet[1314]: I1123 09:02:27.274365    1314 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-qfmgc" podStartSLOduration=1.274340133 podStartE2EDuration="1.274340133s" podCreationTimestamp="2025-11-23 09:02:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 09:02:27.274226187 +0000 UTC m=+6.164906621" watchObservedRunningTime="2025-11-23 09:02:27.274340133 +0000 UTC m=+6.165020555"
	Nov 23 09:02:27 pause-397202 kubelet[1314]: I1123 09:02:27.283718    1314 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-hkxw7" podStartSLOduration=1.283694572 podStartE2EDuration="1.283694572s" podCreationTimestamp="2025-11-23 09:02:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 09:02:27.283499565 +0000 UTC m=+6.174179988" watchObservedRunningTime="2025-11-23 09:02:27.283694572 +0000 UTC m=+6.174374994"
	Nov 23 09:02:37 pause-397202 kubelet[1314]: I1123 09:02:37.947752    1314 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Nov 23 09:02:38 pause-397202 kubelet[1314]: I1123 09:02:38.020618    1314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4n92f\" (UniqueName: \"kubernetes.io/projected/9e1f38f4-aec9-4d81-9da4-8077ab957f85-kube-api-access-4n92f\") pod \"coredns-66bc5c9577-llbxg\" (UID: \"9e1f38f4-aec9-4d81-9da4-8077ab957f85\") " pod="kube-system/coredns-66bc5c9577-llbxg"
	Nov 23 09:02:38 pause-397202 kubelet[1314]: I1123 09:02:38.020685    1314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9e1f38f4-aec9-4d81-9da4-8077ab957f85-config-volume\") pod \"coredns-66bc5c9577-llbxg\" (UID: \"9e1f38f4-aec9-4d81-9da4-8077ab957f85\") " pod="kube-system/coredns-66bc5c9577-llbxg"
	Nov 23 09:02:39 pause-397202 kubelet[1314]: I1123 09:02:39.306852    1314 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-llbxg" podStartSLOduration=13.306829748 podStartE2EDuration="13.306829748s" podCreationTimestamp="2025-11-23 09:02:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 09:02:39.306432581 +0000 UTC m=+18.197113003" watchObservedRunningTime="2025-11-23 09:02:39.306829748 +0000 UTC m=+18.197510171"
	Nov 23 09:02:47 pause-397202 kubelet[1314]: I1123 09:02:47.582821    1314 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Nov 23 09:02:47 pause-397202 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 23 09:02:47 pause-397202 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 23 09:02:47 pause-397202 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Nov 23 09:02:47 pause-397202 systemd[1]: kubelet.service: Consumed 1.210s CPU time.
	

-- /stdout --
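The journal above ends with systemd stopping kubelet.service at 09:02:47, which is the expected effect of a pause. A minimal manual cross-check from the host, assuming the pause-397202 profile is still up (these commands are not part of the original run):

	# kubelet should be inactive after a successful pause
	out/minikube-linux-amd64 -p pause-397202 ssh -- sudo systemctl is-active kubelet
	# list containers as the runtime sees them
	out/minikube-linux-amd64 -p pause-397202 ssh -- sudo crictl ps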
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-397202 -n pause-397202
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-397202 -n pause-397202: exit status 2 (360.917271ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context pause-397202 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPause/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/Pause (5.97s)
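For local triage, the failing sequence can be replayed by hand. A sketch, assuming the pause-397202 profile from the logs above (the pause invocation itself is inferred from the test name, not quoted from this run):

	# replay the pause, then the same status probe the harness used
	out/minikube-linux-amd64 pause -p pause-397202 --alsologtostderr
	out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-397202 -n pause-397202
	# the harness expects a paused apiserver here; this run still printed "Running"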

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (2.31s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-054094 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-054094 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (274.685275ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T09:07:12Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-054094 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
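The exit-11 failures in this group all trace back to the same `check paused` probe: minikube effectively runs `sudo runc list -f json` on the node, which fails because `/run/runc` does not exist in the kic container. A hedged way to inspect that by hand (the `runtime_root` grep is an assumption about the crio config layout, not something this log confirms):

	# reproduce the probe minikube runs
	out/minikube-linux-amd64 -p old-k8s-version-054094 ssh -- sudo runc list -f json
	# does the state dir runc defaults to exist at all?
	out/minikube-linux-amd64 -p old-k8s-version-054094 ssh -- sudo ls -la /run/runc
	# where crio actually keeps runc state, if runtime_root is set
	out/minikube-linux-amd64 -p old-k8s-version-054094 ssh -- sudo crio config | grep -n runtime_root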
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-054094 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context old-k8s-version-054094 describe deploy/metrics-server -n kube-system: exit status 1 (70.007301ms)

** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-054094 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
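Had the addon come up, the image assertion above corresponds to a one-line check; shown here only as the manual form of that check (on this run it returns NotFound, as the describe call already did):

	kubectl --context old-k8s-version-054094 -n kube-system \
	  get deploy metrics-server -o jsonpath='{.spec.template.spec.containers[0].image}'
	# expected to print fake.domain/registry.k8s.io/echoserver:1.4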
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-054094
helpers_test.go:243: (dbg) docker inspect old-k8s-version-054094:

-- stdout --
	[
	    {
	        "Id": "6fbb3e1692df1d8bcc2570eb6c4edce5c5187b424f65561f910e65c2a64b97b3",
	        "Created": "2025-11-23T09:06:14.055238477Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 380141,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-23T09:06:14.112119329Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:133ca4ac39008d0056ad45d8cb70521d6b70d6e1b8bbff4678fd4b354efbdf70",
	        "ResolvConfPath": "/var/lib/docker/containers/6fbb3e1692df1d8bcc2570eb6c4edce5c5187b424f65561f910e65c2a64b97b3/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/6fbb3e1692df1d8bcc2570eb6c4edce5c5187b424f65561f910e65c2a64b97b3/hostname",
	        "HostsPath": "/var/lib/docker/containers/6fbb3e1692df1d8bcc2570eb6c4edce5c5187b424f65561f910e65c2a64b97b3/hosts",
	        "LogPath": "/var/lib/docker/containers/6fbb3e1692df1d8bcc2570eb6c4edce5c5187b424f65561f910e65c2a64b97b3/6fbb3e1692df1d8bcc2570eb6c4edce5c5187b424f65561f910e65c2a64b97b3-json.log",
	        "Name": "/old-k8s-version-054094",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-054094:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "old-k8s-version-054094",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "6fbb3e1692df1d8bcc2570eb6c4edce5c5187b424f65561f910e65c2a64b97b3",
	                "LowerDir": "/var/lib/docker/overlay2/7896100ea5d6d69fd8679aef5e7b10670677a84f077ad468f383d9f86b9a4a33-init/diff:/var/lib/docker/overlay2/5d7426d7b7590330a796f9ce8929a3dfaf6f95af5e86f4e4ea7f2a7f53308616/diff",
	                "MergedDir": "/var/lib/docker/overlay2/7896100ea5d6d69fd8679aef5e7b10670677a84f077ad468f383d9f86b9a4a33/merged",
	                "UpperDir": "/var/lib/docker/overlay2/7896100ea5d6d69fd8679aef5e7b10670677a84f077ad468f383d9f86b9a4a33/diff",
	                "WorkDir": "/var/lib/docker/overlay2/7896100ea5d6d69fd8679aef5e7b10670677a84f077ad468f383d9f86b9a4a33/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-054094",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-054094/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-054094",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-054094",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-054094",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "6b3f3aa51cf44ca4e898557742615ddfd9a5ed9b6c51cbc27a46c6e3c963c527",
	            "SandboxKey": "/var/run/docker/netns/6b3f3aa51cf4",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33088"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33089"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33092"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33090"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33091"
	                    }
	                ]
	            },
	            "Networks": {
	                "old-k8s-version-054094": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "76e5790841e8d84532c8d28d1be8e40ba53fa4abb8a22eef487cc6e2d204979d",
	                    "EndpointID": "2421217780d7244615a80d34407b65c8be67a489eae257a6f4a5477e6f9d9d6d",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "MacAddress": "3a:83:32:56:47:13",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-054094",
	                        "6fbb3e1692df"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
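The inspect dump above can be reduced to the few fields these tests care about with --format, which takes a Go template (profile name reused from above; `json` and `index` are standard docker CLI template functions):

	# host ports mapped for ssh (22) and the apiserver (8443)
	docker inspect -f '{{json .NetworkSettings.Ports}}' old-k8s-version-054094
	# the container IP on the per-profile network
	docker inspect -f '{{(index .NetworkSettings.Networks "old-k8s-version-054094").IPAddress}}' old-k8s-version-054094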
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-054094 -n old-k8s-version-054094
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-054094 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-054094 logs -n 25: (1.112362802s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                   ARGS                                                                                   │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p bridge-741183 sudo cat /etc/kubernetes/kubelet.conf                                                                                                                   │ bridge-741183                │ jenkins │ v1.37.0 │ 23 Nov 25 09:06 UTC │ 23 Nov 25 09:06 UTC │
	│ ssh     │ -p bridge-741183 sudo cat /var/lib/kubelet/config.yaml                                                                                                                   │ bridge-741183                │ jenkins │ v1.37.0 │ 23 Nov 25 09:06 UTC │ 23 Nov 25 09:06 UTC │
	│ ssh     │ -p bridge-741183 sudo systemctl status docker --all --full --no-pager                                                                                                    │ bridge-741183                │ jenkins │ v1.37.0 │ 23 Nov 25 09:06 UTC │                     │
	│ ssh     │ -p bridge-741183 sudo systemctl cat docker --no-pager                                                                                                                    │ bridge-741183                │ jenkins │ v1.37.0 │ 23 Nov 25 09:06 UTC │ 23 Nov 25 09:06 UTC │
	│ ssh     │ -p bridge-741183 sudo cat /etc/docker/daemon.json                                                                                                                        │ bridge-741183                │ jenkins │ v1.37.0 │ 23 Nov 25 09:06 UTC │                     │
	│ ssh     │ -p bridge-741183 sudo docker system info                                                                                                                                 │ bridge-741183                │ jenkins │ v1.37.0 │ 23 Nov 25 09:06 UTC │                     │
	│ ssh     │ -p bridge-741183 sudo systemctl status cri-docker --all --full --no-pager                                                                                                │ bridge-741183                │ jenkins │ v1.37.0 │ 23 Nov 25 09:06 UTC │                     │
	│ ssh     │ -p bridge-741183 sudo systemctl cat cri-docker --no-pager                                                                                                                │ bridge-741183                │ jenkins │ v1.37.0 │ 23 Nov 25 09:07 UTC │ 23 Nov 25 09:07 UTC │
	│ start   │ -p embed-certs-529341 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                   │ embed-certs-529341           │ jenkins │ v1.37.0 │ 23 Nov 25 09:07 UTC │                     │
	│ ssh     │ -p bridge-741183 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                                           │ bridge-741183                │ jenkins │ v1.37.0 │ 23 Nov 25 09:07 UTC │                     │
	│ ssh     │ -p bridge-741183 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                                     │ bridge-741183                │ jenkins │ v1.37.0 │ 23 Nov 25 09:07 UTC │ 23 Nov 25 09:07 UTC │
	│ ssh     │ -p bridge-741183 sudo cri-dockerd --version                                                                                                                              │ bridge-741183                │ jenkins │ v1.37.0 │ 23 Nov 25 09:07 UTC │ 23 Nov 25 09:07 UTC │
	│ ssh     │ -p bridge-741183 sudo systemctl status containerd --all --full --no-pager                                                                                                │ bridge-741183                │ jenkins │ v1.37.0 │ 23 Nov 25 09:07 UTC │                     │
	│ ssh     │ -p bridge-741183 sudo systemctl cat containerd --no-pager                                                                                                                │ bridge-741183                │ jenkins │ v1.37.0 │ 23 Nov 25 09:07 UTC │ 23 Nov 25 09:07 UTC │
	│ ssh     │ -p bridge-741183 sudo cat /lib/systemd/system/containerd.service                                                                                                         │ bridge-741183                │ jenkins │ v1.37.0 │ 23 Nov 25 09:07 UTC │ 23 Nov 25 09:07 UTC │
	│ ssh     │ -p bridge-741183 sudo cat /etc/containerd/config.toml                                                                                                                    │ bridge-741183                │ jenkins │ v1.37.0 │ 23 Nov 25 09:07 UTC │ 23 Nov 25 09:07 UTC │
	│ ssh     │ -p bridge-741183 sudo containerd config dump                                                                                                                             │ bridge-741183                │ jenkins │ v1.37.0 │ 23 Nov 25 09:07 UTC │ 23 Nov 25 09:07 UTC │
	│ ssh     │ -p bridge-741183 sudo systemctl status crio --all --full --no-pager                                                                                                      │ bridge-741183                │ jenkins │ v1.37.0 │ 23 Nov 25 09:07 UTC │ 23 Nov 25 09:07 UTC │
	│ ssh     │ -p bridge-741183 sudo systemctl cat crio --no-pager                                                                                                                      │ bridge-741183                │ jenkins │ v1.37.0 │ 23 Nov 25 09:07 UTC │ 23 Nov 25 09:07 UTC │
	│ ssh     │ -p bridge-741183 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                            │ bridge-741183                │ jenkins │ v1.37.0 │ 23 Nov 25 09:07 UTC │ 23 Nov 25 09:07 UTC │
	│ ssh     │ -p bridge-741183 sudo crio config                                                                                                                                        │ bridge-741183                │ jenkins │ v1.37.0 │ 23 Nov 25 09:07 UTC │ 23 Nov 25 09:07 UTC │
	│ delete  │ -p bridge-741183                                                                                                                                                         │ bridge-741183                │ jenkins │ v1.37.0 │ 23 Nov 25 09:07 UTC │ 23 Nov 25 09:07 UTC │
	│ delete  │ -p disable-driver-mounts-740936                                                                                                                                          │ disable-driver-mounts-740936 │ jenkins │ v1.37.0 │ 23 Nov 25 09:07 UTC │ 23 Nov 25 09:07 UTC │
	│ start   │ -p default-k8s-diff-port-602386 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ default-k8s-diff-port-602386 │ jenkins │ v1.37.0 │ 23 Nov 25 09:07 UTC │                     │
	│ addons  │ enable metrics-server -p old-k8s-version-054094 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                             │ old-k8s-version-054094       │ jenkins │ v1.37.0 │ 23 Nov 25 09:07 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/23 09:07:08
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1123 09:07:08.320858  401015 out.go:360] Setting OutFile to fd 1 ...
	I1123 09:07:08.320987  401015 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 09:07:08.320998  401015 out.go:374] Setting ErrFile to fd 2...
	I1123 09:07:08.321005  401015 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 09:07:08.321255  401015 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21969-103686/.minikube/bin
	I1123 09:07:08.321772  401015 out.go:368] Setting JSON to false
	I1123 09:07:08.323156  401015 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":6568,"bootTime":1763882260,"procs":322,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1123 09:07:08.323224  401015 start.go:143] virtualization: kvm guest
	I1123 09:07:08.325128  401015 out.go:179] * [default-k8s-diff-port-602386] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1123 09:07:08.327865  401015 notify.go:221] Checking for updates...
	I1123 09:07:08.327890  401015 out.go:179]   - MINIKUBE_LOCATION=21969
	I1123 09:07:08.329123  401015 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1123 09:07:08.330266  401015 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21969-103686/kubeconfig
	I1123 09:07:08.331594  401015 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21969-103686/.minikube
	I1123 09:07:08.332728  401015 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1123 09:07:08.333949  401015 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1123 09:07:08.335800  401015 config.go:182] Loaded profile config "embed-certs-529341": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 09:07:08.335979  401015 config.go:182] Loaded profile config "no-preload-619589": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 09:07:08.336118  401015 config.go:182] Loaded profile config "old-k8s-version-054094": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1123 09:07:08.336251  401015 driver.go:422] Setting default libvirt URI to qemu:///system
	I1123 09:07:08.361501  401015 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1123 09:07:08.361678  401015 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 09:07:08.434623  401015 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:67 OomKillDisable:false NGoroutines:78 SystemTime:2025-11-23 09:07:08.421119525 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1123 09:07:08.434832  401015 docker.go:319] overlay module found
	I1123 09:07:08.436923  401015 out.go:179] * Using the docker driver based on user configuration
	I1123 09:07:08.438018  401015 start.go:309] selected driver: docker
	I1123 09:07:08.438033  401015 start.go:927] validating driver "docker" against <nil>
	I1123 09:07:08.438053  401015 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1123 09:07:08.438550  401015 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 09:07:08.506899  401015 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:67 OomKillDisable:false NGoroutines:78 SystemTime:2025-11-23 09:07:08.495378652 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1123 09:07:08.507109  401015 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1123 09:07:08.507315  401015 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1123 09:07:08.509202  401015 out.go:179] * Using Docker driver with root privileges
	I1123 09:07:08.510308  401015 cni.go:84] Creating CNI manager for ""
	I1123 09:07:08.510389  401015 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1123 09:07:08.510404  401015 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1123 09:07:08.510472  401015 start.go:353] cluster config:
	{Name:default-k8s-diff-port-602386 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-602386 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 09:07:08.511714  401015 out.go:179] * Starting "default-k8s-diff-port-602386" primary control-plane node in "default-k8s-diff-port-602386" cluster
	I1123 09:07:08.512736  401015 cache.go:134] Beginning downloading kic base image for docker with crio
	I1123 09:07:08.513843  401015 out.go:179] * Pulling base image v0.0.48-1763789673-21948 ...
	I1123 09:07:08.514862  401015 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1123 09:07:08.514894  401015 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21969-103686/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1123 09:07:08.514903  401015 cache.go:65] Caching tarball of preloaded images
	I1123 09:07:08.514950  401015 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1123 09:07:08.515008  401015 preload.go:238] Found /home/jenkins/minikube-integration/21969-103686/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1123 09:07:08.515024  401015 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1123 09:07:08.515120  401015 profile.go:143] Saving config to /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/default-k8s-diff-port-602386/config.json ...
	I1123 09:07:08.515148  401015 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/default-k8s-diff-port-602386/config.json: {Name:mk8f3b6ec1fd2a4559a32a2a474b74464c0f0ecb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 09:07:08.537540  401015 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon, skipping pull
	I1123 09:07:08.537562  401015 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in daemon, skipping load
	I1123 09:07:08.537583  401015 cache.go:243] Successfully downloaded all kic artifacts
	I1123 09:07:08.537631  401015 start.go:360] acquireMachinesLock for default-k8s-diff-port-602386: {Name:mk936d882fdf1c8707634b4555fdb3d8130ce5fd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1123 09:07:08.537742  401015 start.go:364] duration metric: took 92.278µs to acquireMachinesLock for "default-k8s-diff-port-602386"
	I1123 09:07:08.537772  401015 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-602386 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-602386 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1123 09:07:08.537865  401015 start.go:125] createHost starting for "" (driver="docker")
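
The profile config written by profile.go:143 above is plain JSON on disk. As an illustration of its shape, here is a minimal Go sketch that reads back a handful of the fields visible in the cluster-config dump; the struct covers only that subset (all field names are taken from the log, everything else in the real minikube ClusterConfig is omitted):

package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// Illustrative subset of the cluster config dumped above; the real
// minikube ClusterConfig carries many more fields.
type KubernetesConfig struct {
	KubernetesVersion string
	ClusterName       string
	ContainerRuntime  string
	ServiceCIDR       string
}

type ClusterConfig struct {
	Name             string
	Driver           string
	Memory           int
	CPUs             int
	APIServerPort    int
	KubernetesConfig KubernetesConfig
}

func main() {
	// e.g. .minikube/profiles/default-k8s-diff-port-602386/config.json
	b, err := os.ReadFile(os.Args[1])
	if err != nil {
		panic(err)
	}
	var cc ClusterConfig
	if err := json.Unmarshal(b, &cc); err != nil {
		panic(err)
	}
	fmt.Printf("%s: driver=%s runtime=%s k8s=%s apiserver-port=%d\n",
		cc.Name, cc.Driver, cc.KubernetesConfig.ContainerRuntime,
		cc.KubernetesConfig.KubernetesVersion, cc.APIServerPort)
}
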
	W1123 09:07:04.372774  384612 node_ready.go:57] node "no-preload-619589" has "Ready":"False" status (will retry)
	W1123 09:07:06.452666  384612 node_ready.go:57] node "no-preload-619589" has "Ready":"False" status (will retry)
	W1123 09:07:08.872555  384612 node_ready.go:57] node "no-preload-619589" has "Ready":"False" status (will retry)
	I1123 09:07:06.045023  397302 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21969-103686/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v embed-certs-529341:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -I lz4 -xf /preloaded.tar -C /extractDir: (5.017097372s)
	I1123 09:07:06.045071  397302 kic.go:203] duration metric: took 5.017240686s to extract preloaded images to volume ...
	W1123 09:07:06.045164  397302 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1123 09:07:06.045212  397302 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1123 09:07:06.045265  397302 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1123 09:07:06.129340  397302 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname embed-certs-529341 --name embed-certs-529341 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-529341 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=embed-certs-529341 --network embed-certs-529341 --ip 192.168.103.2 --volume embed-certs-529341:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f
	I1123 09:07:06.753057  397302 cli_runner.go:164] Run: docker container inspect embed-certs-529341 --format={{.State.Running}}
	I1123 09:07:06.772960  397302 cli_runner.go:164] Run: docker container inspect embed-certs-529341 --format={{.State.Status}}
	I1123 09:07:06.792229  397302 cli_runner.go:164] Run: docker exec embed-certs-529341 stat /var/lib/dpkg/alternatives/iptables
	I1123 09:07:06.841376  397302 oci.go:144] the created container "embed-certs-529341" has a running status.
	I1123 09:07:06.841415  397302 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21969-103686/.minikube/machines/embed-certs-529341/id_rsa...
	I1123 09:07:06.950550  397302 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21969-103686/.minikube/machines/embed-certs-529341/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1123 09:07:06.985457  397302 cli_runner.go:164] Run: docker container inspect embed-certs-529341 --format={{.State.Status}}
	I1123 09:07:07.004386  397302 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1123 09:07:07.004413  397302 kic_runner.go:114] Args: [docker exec --privileged embed-certs-529341 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1123 09:07:07.067651  397302 cli_runner.go:164] Run: docker container inspect embed-certs-529341 --format={{.State.Status}}
	I1123 09:07:07.091716  397302 machine.go:94] provisionDockerMachine start ...
	I1123 09:07:07.091860  397302 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-529341
	I1123 09:07:07.116021  397302 main.go:143] libmachine: Using SSH client type: native
	I1123 09:07:07.116602  397302 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33098 <nil> <nil>}
	I1123 09:07:07.116650  397302 main.go:143] libmachine: About to run SSH command:
	hostname
	I1123 09:07:07.270199  397302 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-529341
	
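
The native SSH client above dials 127.0.0.1:33098 because the kic container publishes its port 22 on an ephemeral host port; the docker container inspect template in the log is how that port is recovered. A minimal sketch of the same lookup, assuming only the docker CLI and the container name from the log:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Same Go template the log shows cli_runner executing.
	out, err := exec.Command("docker", "container", "inspect", "-f",
		`{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`,
		"embed-certs-529341").Output()
	if err != nil {
		panic(err)
	}
	port := strings.TrimSpace(string(out))
	// The SSH client then dials 127.0.0.1:<port> as user "docker".
	fmt.Printf("ssh -p %s docker@127.0.0.1\n", port)
}
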
	I1123 09:07:07.270227  397302 ubuntu.go:182] provisioning hostname "embed-certs-529341"
	I1123 09:07:07.270296  397302 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-529341
	I1123 09:07:07.291192  397302 main.go:143] libmachine: Using SSH client type: native
	I1123 09:07:07.291472  397302 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33098 <nil> <nil>}
	I1123 09:07:07.291495  397302 main.go:143] libmachine: About to run SSH command:
	sudo hostname embed-certs-529341 && echo "embed-certs-529341" | sudo tee /etc/hostname
	I1123 09:07:07.466762  397302 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-529341
	
	I1123 09:07:07.466851  397302 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-529341
	I1123 09:07:07.489355  397302 main.go:143] libmachine: Using SSH client type: native
	I1123 09:07:07.489763  397302 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33098 <nil> <nil>}
	I1123 09:07:07.489790  397302 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-529341' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-529341/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-529341' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1123 09:07:07.666676  397302 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1123 09:07:07.666712  397302 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21969-103686/.minikube CaCertPath:/home/jenkins/minikube-integration/21969-103686/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21969-103686/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21969-103686/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21969-103686/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21969-103686/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21969-103686/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21969-103686/.minikube}
	I1123 09:07:07.666734  397302 ubuntu.go:190] setting up certificates
	I1123 09:07:07.666753  397302 provision.go:84] configureAuth start
	I1123 09:07:07.666807  397302 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-529341
	I1123 09:07:07.685513  397302 provision.go:143] copyHostCerts
	I1123 09:07:07.685579  397302 exec_runner.go:144] found /home/jenkins/minikube-integration/21969-103686/.minikube/cert.pem, removing ...
	I1123 09:07:07.685599  397302 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21969-103686/.minikube/cert.pem
	I1123 09:07:07.685661  397302 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21969-103686/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21969-103686/.minikube/cert.pem (1123 bytes)
	I1123 09:07:07.685768  397302 exec_runner.go:144] found /home/jenkins/minikube-integration/21969-103686/.minikube/key.pem, removing ...
	I1123 09:07:07.685779  397302 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21969-103686/.minikube/key.pem
	I1123 09:07:07.685813  397302 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21969-103686/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21969-103686/.minikube/key.pem (1675 bytes)
	I1123 09:07:07.685885  397302 exec_runner.go:144] found /home/jenkins/minikube-integration/21969-103686/.minikube/ca.pem, removing ...
	I1123 09:07:07.685896  397302 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21969-103686/.minikube/ca.pem
	I1123 09:07:07.685925  397302 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21969-103686/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21969-103686/.minikube/ca.pem (1078 bytes)
	I1123 09:07:07.686016  397302 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21969-103686/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21969-103686/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21969-103686/.minikube/certs/ca-key.pem org=jenkins.embed-certs-529341 san=[127.0.0.1 192.168.103.2 embed-certs-529341 localhost minikube]
	I1123 09:07:07.726091  397302 provision.go:177] copyRemoteCerts
	I1123 09:07:07.726152  397302 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1123 09:07:07.726193  397302 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-529341
	I1123 09:07:07.746067  397302 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/21969-103686/.minikube/machines/embed-certs-529341/id_rsa Username:docker}
	I1123 09:07:07.848621  397302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-103686/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1123 09:07:07.877290  397302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-103686/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1123 09:07:07.897030  397302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-103686/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1123 09:07:07.938111  397302 provision.go:87] duration metric: took 271.341725ms to configureAuth
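
configureAuth above mints a server certificate signed by the local minikube CA, with the SANs listed on the provision.go:117 line (127.0.0.1, 192.168.103.2, embed-certs-529341, localhost, minikube). A self-contained sketch of issuing a certificate with those SANs using the Go standard library; it generates a throwaway ECDSA CA in memory instead of loading ca.pem/ca-key.pem, so it is illustrative only, not minikube's implementation (error handling elided for brevity):

package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Throwaway in-memory CA; minikube loads ca.pem/ca-key.pem instead.
	caKey, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	ca := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(26280 * time.Hour), // CertExpiration in the config dump
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, ca, ca, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server cert with the SANs from the provision.go:117 line above.
	srvKey, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	srv := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.embed-certs-529341"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.103.2")},
		DNSNames:     []string{"embed-certs-529341", "localhost", "minikube"},
	}
	der, _ := x509.CreateCertificate(rand.Reader, srv, caCert, &srvKey.PublicKey, caKey)
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der}) // the server.pem equivalent
}
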
	I1123 09:07:07.938137  397302 ubuntu.go:206] setting minikube options for container-runtime
	I1123 09:07:07.938281  397302 config.go:182] Loaded profile config "embed-certs-529341": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 09:07:07.938378  397302 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-529341
	I1123 09:07:07.956668  397302 main.go:143] libmachine: Using SSH client type: native
	I1123 09:07:07.956913  397302 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33098 <nil> <nil>}
	I1123 09:07:07.956932  397302 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1123 09:07:08.253272  397302 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1123 09:07:08.253301  397302 machine.go:97] duration metric: took 1.161555751s to provisionDockerMachine
	I1123 09:07:08.253313  397302 client.go:176] duration metric: took 7.87499223s to LocalClient.Create
	I1123 09:07:08.253326  397302 start.go:167] duration metric: took 7.875051823s to libmachine.API.Create "embed-certs-529341"
	I1123 09:07:08.253333  397302 start.go:293] postStartSetup for "embed-certs-529341" (driver="docker")
	I1123 09:07:08.253342  397302 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1123 09:07:08.253399  397302 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1123 09:07:08.253442  397302 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-529341
	I1123 09:07:08.273300  397302 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/21969-103686/.minikube/machines/embed-certs-529341/id_rsa Username:docker}
	I1123 09:07:08.380260  397302 ssh_runner.go:195] Run: cat /etc/os-release
	I1123 09:07:08.385526  397302 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1123 09:07:08.385574  397302 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1123 09:07:08.385588  397302 filesync.go:126] Scanning /home/jenkins/minikube-integration/21969-103686/.minikube/addons for local assets ...
	I1123 09:07:08.385661  397302 filesync.go:126] Scanning /home/jenkins/minikube-integration/21969-103686/.minikube/files for local assets ...
	I1123 09:07:08.385758  397302 filesync.go:149] local asset: /home/jenkins/minikube-integration/21969-103686/.minikube/files/etc/ssl/certs/1072342.pem -> 1072342.pem in /etc/ssl/certs
	I1123 09:07:08.385892  397302 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1123 09:07:08.397303  397302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-103686/.minikube/files/etc/ssl/certs/1072342.pem --> /etc/ssl/certs/1072342.pem (1708 bytes)
	I1123 09:07:08.426991  397302 start.go:296] duration metric: took 173.642066ms for postStartSetup
	I1123 09:07:08.427415  397302 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-529341
	I1123 09:07:08.450325  397302 profile.go:143] Saving config to /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/embed-certs-529341/config.json ...
	I1123 09:07:08.450700  397302 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1123 09:07:08.450761  397302 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-529341
	I1123 09:07:08.472787  397302 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/21969-103686/.minikube/machines/embed-certs-529341/id_rsa Username:docker}
	I1123 09:07:08.581526  397302 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1123 09:07:08.586140  397302 start.go:128] duration metric: took 8.21011541s to createHost
	I1123 09:07:08.586165  397302 start.go:83] releasing machines lock for "embed-certs-529341", held for 8.210294405s
	I1123 09:07:08.586241  397302 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-529341
	I1123 09:07:08.606886  397302 ssh_runner.go:195] Run: cat /version.json
	I1123 09:07:08.606911  397302 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1123 09:07:08.606944  397302 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-529341
	I1123 09:07:08.606991  397302 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-529341
	I1123 09:07:08.628561  397302 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/21969-103686/.minikube/machines/embed-certs-529341/id_rsa Username:docker}
	I1123 09:07:08.629587  397302 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/21969-103686/.minikube/machines/embed-certs-529341/id_rsa Username:docker}
	I1123 09:07:08.801920  397302 ssh_runner.go:195] Run: systemctl --version
	I1123 09:07:08.809058  397302 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1123 09:07:08.852834  397302 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1123 09:07:08.858091  397302 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1123 09:07:08.858164  397302 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1123 09:07:08.891180  397302 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1123 09:07:08.891206  397302 start.go:496] detecting cgroup driver to use...
	I1123 09:07:08.891238  397302 detect.go:190] detected "systemd" cgroup driver on host os
	I1123 09:07:08.891290  397302 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1123 09:07:08.911627  397302 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1123 09:07:08.927032  397302 docker.go:218] disabling cri-docker service (if available) ...
	I1123 09:07:08.927094  397302 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1123 09:07:08.948755  397302 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1123 09:07:08.969165  397302 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1123 09:07:09.074577  397302 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1123 09:07:09.176497  397302 docker.go:234] disabling docker service ...
	I1123 09:07:09.176560  397302 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1123 09:07:09.196209  397302 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1123 09:07:09.209433  397302 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1123 09:07:09.303280  397302 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1123 09:07:09.408362  397302 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1123 09:07:09.421625  397302 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1123 09:07:09.439556  397302 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1123 09:07:09.439623  397302 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 09:07:09.454005  397302 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1123 09:07:09.454078  397302 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 09:07:09.463814  397302 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 09:07:09.473006  397302 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 09:07:09.482337  397302 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1123 09:07:09.491129  397302 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 09:07:09.500233  397302 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 09:07:09.514176  397302 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 09:07:09.523455  397302 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1123 09:07:09.531320  397302 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1123 09:07:09.539204  397302 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 09:07:09.623347  397302 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1123 09:07:10.890742  397302 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1.267355682s)
	I1123 09:07:10.890767  397302 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1123 09:07:10.890821  397302 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1123 09:07:10.895053  397302 start.go:564] Will wait 60s for crictl version
	I1123 09:07:10.895110  397302 ssh_runner.go:195] Run: which crictl
	I1123 09:07:10.898897  397302 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1123 09:07:10.925045  397302 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1123 09:07:10.925137  397302 ssh_runner.go:195] Run: crio --version
	I1123 09:07:10.954258  397302 ssh_runner.go:195] Run: crio --version
	I1123 09:07:10.986015  397302 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
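
The sed pipeline a few lines above edits /etc/crio/crio.conf.d/02-crio.conf in place (pause image, systemd cgroup manager, conmon cgroup, the unprivileged-port sysctl) and then restarts crio. The net effect is roughly the drop-in printed below; a hedged sketch with values copied from the log lines, since the real file carries additional keys that the sed commands leave untouched:

package main

import "fmt"

// Approximate net effect of the sed edits above on
// /etc/crio/crio.conf.d/02-crio.conf; only the keys touched here are shown.
const dropIn = `[crio.image]
pause_image = "registry.k8s.io/pause:3.10.1"

[crio.runtime]
cgroup_manager = "systemd"
conmon_cgroup = "pod"
default_sysctls = [
  "net.ipv4.ip_unprivileged_port_start=0",
]
`

func main() {
	fmt.Print(dropIn)
	// minikube then runs: systemctl daemon-reload && systemctl restart crio
}
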
	I1123 09:07:08.539891  401015 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1123 09:07:08.540128  401015 start.go:159] libmachine.API.Create for "default-k8s-diff-port-602386" (driver="docker")
	I1123 09:07:08.540165  401015 client.go:173] LocalClient.Create starting
	I1123 09:07:08.540239  401015 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21969-103686/.minikube/certs/ca.pem
	I1123 09:07:08.540277  401015 main.go:143] libmachine: Decoding PEM data...
	I1123 09:07:08.540300  401015 main.go:143] libmachine: Parsing certificate...
	I1123 09:07:08.540362  401015 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21969-103686/.minikube/certs/cert.pem
	I1123 09:07:08.540390  401015 main.go:143] libmachine: Decoding PEM data...
	I1123 09:07:08.540409  401015 main.go:143] libmachine: Parsing certificate...
	I1123 09:07:08.540731  401015 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-602386 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1123 09:07:08.559635  401015 cli_runner.go:211] docker network inspect default-k8s-diff-port-602386 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1123 09:07:08.559704  401015 network_create.go:284] running [docker network inspect default-k8s-diff-port-602386] to gather additional debugging logs...
	I1123 09:07:08.559722  401015 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-602386
	W1123 09:07:08.576376  401015 cli_runner.go:211] docker network inspect default-k8s-diff-port-602386 returned with exit code 1
	I1123 09:07:08.576403  401015 network_create.go:287] error running [docker network inspect default-k8s-diff-port-602386]: docker network inspect default-k8s-diff-port-602386: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network default-k8s-diff-port-602386 not found
	I1123 09:07:08.576416  401015 network_create.go:289] output of [docker network inspect default-k8s-diff-port-602386]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network default-k8s-diff-port-602386 not found
	
	** /stderr **
	I1123 09:07:08.576534  401015 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1123 09:07:08.599338  401015 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-f35ea3fda0f8 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:12:67:c4:67:42:d0} reservation:<nil>}
	I1123 09:07:08.600239  401015 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-b5718ee288aa IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:5e:cf:46:ea:6c:f7} reservation:<nil>}
	I1123 09:07:08.601140  401015 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-7539aab81c9c IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:ca:4a:40:12:17:c0} reservation:<nil>}
	I1123 09:07:08.601742  401015 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-76e5790841e8 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:06:11:3d:17:90:c8} reservation:<nil>}
	I1123 09:07:08.602428  401015 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-40c67f27f792 IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:5a:47:fa:80:b9:69} reservation:<nil>}
	I1123 09:07:08.603433  401015 network.go:206] using free private subnet 192.168.94.0/24: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002dcd00}
	I1123 09:07:08.603463  401015 network_create.go:124] attempt to create docker network default-k8s-diff-port-602386 192.168.94.0/24 with gateway 192.168.94.1 and MTU of 1500 ...
	I1123 09:07:08.603519  401015 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.94.0/24 --gateway=192.168.94.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=default-k8s-diff-port-602386 default-k8s-diff-port-602386
	I1123 09:07:08.664437  401015 network_create.go:108] docker network default-k8s-diff-port-602386 192.168.94.0/24 created
	I1123 09:07:08.664471  401015 kic.go:121] calculated static IP "192.168.94.2" for the "default-k8s-diff-port-602386" container
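
The network.go lines above walk candidate private /24 subnets in steps of 9 (192.168.49.0 → 58 → 67 → 76 → 85 → 94), skipping any that is already taken, and create the docker network on the first free one. A minimal sketch of that skip-taken loop, checking only local interface addresses; minikube additionally consults docker's own network list, and the step-of-9 progression is inferred from the log:

package main

import (
	"fmt"
	"net"
)

// taken reports whether any local interface address already sits inside cidr.
func taken(cidr *net.IPNet) bool {
	addrs, err := net.InterfaceAddrs()
	if err != nil {
		return true // be conservative on error
	}
	for _, a := range addrs {
		if ipn, ok := a.(*net.IPNet); ok && cidr.Contains(ipn.IP) {
			return true
		}
	}
	return false
}

func main() {
	// Same candidates, in the same order, as the "skipping subnet" lines above.
	for third := 49; third <= 255; third += 9 {
		_, cidr, _ := net.ParseCIDR(fmt.Sprintf("192.168.%d.0/24", third))
		if taken(cidr) {
			fmt.Println("skipping subnet", cidr, "that is taken")
			continue
		}
		fmt.Println("using free private subnet", cidr)
		// docker network create --driver=bridge --subnet=... --gateway=... (as in the log)
		return
	}
}
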
	I1123 09:07:08.664541  401015 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1123 09:07:08.685093  401015 cli_runner.go:164] Run: docker volume create default-k8s-diff-port-602386 --label name.minikube.sigs.k8s.io=default-k8s-diff-port-602386 --label created_by.minikube.sigs.k8s.io=true
	I1123 09:07:08.706493  401015 oci.go:103] Successfully created a docker volume default-k8s-diff-port-602386
	I1123 09:07:08.706567  401015 cli_runner.go:164] Run: docker run --rm --name default-k8s-diff-port-602386-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-602386 --entrypoint /usr/bin/test -v default-k8s-diff-port-602386:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -d /var/lib
	I1123 09:07:09.169250  401015 oci.go:107] Successfully prepared a docker volume default-k8s-diff-port-602386
	I1123 09:07:09.169313  401015 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1123 09:07:09.169325  401015 kic.go:194] Starting extracting preloaded images to volume ...
	I1123 09:07:09.169390  401015 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21969-103686/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-602386:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -I lz4 -xf /preloaded.tar -C /extractDir
	I1123 09:07:12.722960  401015 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21969-103686/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-602386:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -I lz4 -xf /preloaded.tar -C /extractDir: (3.553507067s)
	I1123 09:07:12.723018  401015 kic.go:203] duration metric: took 3.553690036s to extract preloaded images to volume ...
	W1123 09:07:12.723117  401015 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1123 09:07:12.723156  401015 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1123 09:07:12.723200  401015 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1123 09:07:12.790535  401015 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname default-k8s-diff-port-602386 --name default-k8s-diff-port-602386 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-602386 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=default-k8s-diff-port-602386 --network default-k8s-diff-port-602386 --ip 192.168.94.2 --volume default-k8s-diff-port-602386:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8444 --publish=127.0.0.1::8444 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f
	I1123 09:07:13.131535  401015 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-602386 --format={{.State.Running}}
	I1123 09:07:13.152945  401015 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-602386 --format={{.State.Status}}
	I1123 09:07:13.177251  401015 cli_runner.go:164] Run: docker exec default-k8s-diff-port-602386 stat /var/lib/dpkg/alternatives/iptables
	I1123 09:07:13.233908  401015 oci.go:144] the created container "default-k8s-diff-port-602386" has a running status.
	I1123 09:07:13.233944  401015 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21969-103686/.minikube/machines/default-k8s-diff-port-602386/id_rsa...
	I1123 09:07:13.284052  401015 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21969-103686/.minikube/machines/default-k8s-diff-port-602386/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
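
kic.go:225 above generates a fresh SSH keypair per machine and copies the public half into /home/docker/.ssh/authorized_keys inside the container (the 381-byte payload). A minimal sketch of producing such an authorized_keys line, assuming an RSA key and the golang.org/x/crypto/ssh package; minikube's exact key parameters may differ:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"fmt"

	"golang.org/x/crypto/ssh"
)

func main() {
	// 2048-bit RSA as a typical machine key; minikube writes the private
	// half to .minikube/machines/<name>/id_rsa.
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	pub, err := ssh.NewPublicKey(&key.PublicKey)
	if err != nil {
		panic(err)
	}
	// One "ssh-rsa AAAA..." line, i.e. the authorized_keys payload above.
	fmt.Print(string(ssh.MarshalAuthorizedKey(pub)))
}
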
	
	
	==> CRI-O <==
	Nov 23 09:06:58 old-k8s-version-054094 crio[774]: time="2025-11-23T09:06:58.906333824Z" level=info msg="Starting container: 388433288d1e90e3d3579c8969793d5bf3c7b94894085f49332e41b7deb66cf2" id=a9a2efc8-05ac-4525-8d0b-caf5d1f1030a name=/runtime.v1.RuntimeService/StartContainer
	Nov 23 09:06:58 old-k8s-version-054094 crio[774]: time="2025-11-23T09:06:58.908959984Z" level=info msg="Started container" PID=2133 containerID=388433288d1e90e3d3579c8969793d5bf3c7b94894085f49332e41b7deb66cf2 description=kube-system/coredns-5dd5756b68-whp8m/coredns id=a9a2efc8-05ac-4525-8d0b-caf5d1f1030a name=/runtime.v1.RuntimeService/StartContainer sandboxID=ebb3108dd974468b64f1c675ffacb6c1718d0f2832b62cb2f2a8bd17fc351073
	Nov 23 09:07:01 old-k8s-version-054094 crio[774]: time="2025-11-23T09:07:01.728298887Z" level=info msg="Running pod sandbox: default/busybox/POD" id=e895b73a-d78b-486b-a1e4-4c4410a9da21 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 23 09:07:01 old-k8s-version-054094 crio[774]: time="2025-11-23T09:07:01.728375522Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 09:07:01 old-k8s-version-054094 crio[774]: time="2025-11-23T09:07:01.734167421Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:fecf991f9212883c713ab6b5aae963dbab1c53360bfa73b794d1217496def925 UID:45bf2904-a260-4a9c-9bb1-efedb8776977 NetNS:/var/run/netns/b7258aa1-3a00-4013-a946-e6a2c9614a33 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc0009170e0}] Aliases:map[]}"
	Nov 23 09:07:01 old-k8s-version-054094 crio[774]: time="2025-11-23T09:07:01.734199877Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Nov 23 09:07:01 old-k8s-version-054094 crio[774]: time="2025-11-23T09:07:01.746244442Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:fecf991f9212883c713ab6b5aae963dbab1c53360bfa73b794d1217496def925 UID:45bf2904-a260-4a9c-9bb1-efedb8776977 NetNS:/var/run/netns/b7258aa1-3a00-4013-a946-e6a2c9614a33 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc0009170e0}] Aliases:map[]}"
	Nov 23 09:07:01 old-k8s-version-054094 crio[774]: time="2025-11-23T09:07:01.746419123Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Nov 23 09:07:01 old-k8s-version-054094 crio[774]: time="2025-11-23T09:07:01.747726463Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Nov 23 09:07:01 old-k8s-version-054094 crio[774]: time="2025-11-23T09:07:01.749053457Z" level=info msg="Ran pod sandbox fecf991f9212883c713ab6b5aae963dbab1c53360bfa73b794d1217496def925 with infra container: default/busybox/POD" id=e895b73a-d78b-486b-a1e4-4c4410a9da21 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 23 09:07:01 old-k8s-version-054094 crio[774]: time="2025-11-23T09:07:01.750361147Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=0f632eeb-f2cf-4757-9627-198c3947fbe7 name=/runtime.v1.ImageService/ImageStatus
	Nov 23 09:07:01 old-k8s-version-054094 crio[774]: time="2025-11-23T09:07:01.750490075Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=0f632eeb-f2cf-4757-9627-198c3947fbe7 name=/runtime.v1.ImageService/ImageStatus
	Nov 23 09:07:01 old-k8s-version-054094 crio[774]: time="2025-11-23T09:07:01.750534889Z" level=info msg="Neither image nor artifact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=0f632eeb-f2cf-4757-9627-198c3947fbe7 name=/runtime.v1.ImageService/ImageStatus
	Nov 23 09:07:01 old-k8s-version-054094 crio[774]: time="2025-11-23T09:07:01.751143199Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=988708c2-3ce3-4c93-810b-da65e00c9325 name=/runtime.v1.ImageService/PullImage
	Nov 23 09:07:01 old-k8s-version-054094 crio[774]: time="2025-11-23T09:07:01.754746028Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 23 09:07:05 old-k8s-version-054094 crio[774]: time="2025-11-23T09:07:05.884637801Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=988708c2-3ce3-4c93-810b-da65e00c9325 name=/runtime.v1.ImageService/PullImage
	Nov 23 09:07:05 old-k8s-version-054094 crio[774]: time="2025-11-23T09:07:05.886282233Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=99f0ac79-bdec-48f6-8a31-a6ee17481af7 name=/runtime.v1.ImageService/ImageStatus
	Nov 23 09:07:05 old-k8s-version-054094 crio[774]: time="2025-11-23T09:07:05.888074866Z" level=info msg="Creating container: default/busybox/busybox" id=6033cf96-d304-4bbb-8732-94d579831b9c name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 09:07:05 old-k8s-version-054094 crio[774]: time="2025-11-23T09:07:05.888261896Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 09:07:06 old-k8s-version-054094 crio[774]: time="2025-11-23T09:07:06.021668981Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 09:07:06 old-k8s-version-054094 crio[774]: time="2025-11-23T09:07:06.023089823Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 09:07:06 old-k8s-version-054094 crio[774]: time="2025-11-23T09:07:06.060452351Z" level=info msg="Created container 76dd838215b8cb5944ea1cc75a63b2c28612e931c2d5552dae3e65dde95a1f81: default/busybox/busybox" id=6033cf96-d304-4bbb-8732-94d579831b9c name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 09:07:06 old-k8s-version-054094 crio[774]: time="2025-11-23T09:07:06.06138828Z" level=info msg="Starting container: 76dd838215b8cb5944ea1cc75a63b2c28612e931c2d5552dae3e65dde95a1f81" id=86cc5ea3-f479-4f0e-bc95-aaec8bea60dc name=/runtime.v1.RuntimeService/StartContainer
	Nov 23 09:07:06 old-k8s-version-054094 crio[774]: time="2025-11-23T09:07:06.064763281Z" level=info msg="Started container" PID=2208 containerID=76dd838215b8cb5944ea1cc75a63b2c28612e931c2d5552dae3e65dde95a1f81 description=default/busybox/busybox id=86cc5ea3-f479-4f0e-bc95-aaec8bea60dc name=/runtime.v1.RuntimeService/StartContainer sandboxID=fecf991f9212883c713ab6b5aae963dbab1c53360bfa73b794d1217496def925
	Nov 23 09:07:12 old-k8s-version-054094 crio[774]: time="2025-11-23T09:07:12.515159156Z" level=error msg="Unhandled Error: unable to upgrade websocket connection: websocket server finished before becoming ready (logger=\"UnhandledError\")"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                              NAMESPACE
	76dd838215b8c       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998   7 seconds ago       Running             busybox                   0                   fecf991f92128       busybox                                          default
	388433288d1e9       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      15 seconds ago      Running             coredns                   0                   ebb3108dd9744       coredns-5dd5756b68-whp8m                         kube-system
	5f8be23fb8437       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      15 seconds ago      Running             storage-provisioner       0                   2ffff39562b62       storage-provisioner                              kube-system
	9dc258f2b8763       docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11    26 seconds ago      Running             kindnet-cni               0                   c28903981ede6       kindnet-fhw8w                                    kube-system
	cad5505a61f36       ea1030da44aa18666a7bf15fddd2a38c3143c3277159cb8bdd95f45c8ce62d7a                                      28 seconds ago      Running             kube-proxy                0                   388d54750c4e7       kube-proxy-9crnb                                 kube-system
	8eb5541836dc1       bb5e0dde9054c02d6badee88547be7e7bb7b7b818d277c8a61b4b29484bbff95                                      47 seconds ago      Running             kube-apiserver            0                   b13392c4bffba       kube-apiserver-old-k8s-version-054094            kube-system
	3d3cd0b0cf83c       4be79c38a4bab6e1252a35697500e8a0d9c5c7c771d9fcc1935c9a7f6cdf4c62                                      47 seconds ago      Running             kube-controller-manager   0                   22b1ea098a2a6       kube-controller-manager-old-k8s-version-054094   kube-system
	6201e53c2d04e       f6f496300a2ae7a6727ccf3080d66d2fd22b6cfc271df5351c976c23a28bb157                                      47 seconds ago      Running             kube-scheduler            0                   af5a2def9d648       kube-scheduler-old-k8s-version-054094            kube-system
	f44522ba9a3e1       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                      47 seconds ago      Running             etcd                      0                   ec6d532a42339       etcd-old-k8s-version-054094                      kube-system
	
	
	==> coredns [388433288d1e90e3d3579c8969793d5bf3c7b94894085f49332e41b7deb66cf2] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b7aacdf6a6aa730aafe4d018cac9b7b5ecfb346cba84a99f64521f87aef8b4958639c1cf97967716465791d05bd38f372615327b7cb1d93c850bae532744d54d
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:44594 - 33054 "HINFO IN 5610276582363429255.8803306868963344272. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.029800469s
	
	
	==> describe nodes <==
	Name:               old-k8s-version-054094
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-054094
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=50c3a8a3c03e8a84b6c978a884d21c3de8c6d4f1
	                    minikube.k8s.io/name=old-k8s-version-054094
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_23T09_06_33_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 23 Nov 2025 09:06:28 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-054094
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 23 Nov 2025 09:07:13 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 23 Nov 2025 09:07:03 +0000   Sun, 23 Nov 2025 09:06:26 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 23 Nov 2025 09:07:03 +0000   Sun, 23 Nov 2025 09:06:26 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 23 Nov 2025 09:07:03 +0000   Sun, 23 Nov 2025 09:06:26 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 23 Nov 2025 09:07:03 +0000   Sun, 23 Nov 2025 09:06:58 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    old-k8s-version-054094
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 9629f1d5bc1ed524a56ce23c69214c09
	  System UUID:                e0f1f612-a814-499c-889a-0902ab6fee2d
	  Boot ID:                    49386d37-de97-442f-8364-9b77578bcf47
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         13s
	  kube-system                 coredns-5dd5756b68-whp8m                          100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     30s
	  kube-system                 etcd-old-k8s-version-054094                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         42s
	  kube-system                 kindnet-fhw8w                                     100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      30s
	  kube-system                 kube-apiserver-old-k8s-version-054094             250m (3%)     0 (0%)      0 (0%)           0 (0%)         42s
	  kube-system                 kube-controller-manager-old-k8s-version-054094    200m (2%)     0 (0%)      0 (0%)           0 (0%)         42s
	  kube-system                 kube-proxy-9crnb                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 kube-scheduler-old-k8s-version-054094             100m (1%)     0 (0%)      0 (0%)           0 (0%)         42s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         29s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 28s                kube-proxy       
	  Normal  NodeHasSufficientMemory  48s (x8 over 48s)  kubelet          Node old-k8s-version-054094 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    48s (x8 over 48s)  kubelet          Node old-k8s-version-054094 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     48s (x8 over 48s)  kubelet          Node old-k8s-version-054094 status is now: NodeHasSufficientPID
	  Normal  Starting                 42s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  42s                kubelet          Node old-k8s-version-054094 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    42s                kubelet          Node old-k8s-version-054094 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     42s                kubelet          Node old-k8s-version-054094 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           31s                node-controller  Node old-k8s-version-054094 event: Registered Node old-k8s-version-054094 in Controller
	  Normal  NodeReady                16s                kubelet          Node old-k8s-version-054094 status is now: NodeReady
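
	The node summary above is the standard describe-node view. To regenerate it against this profile (assuming the kubectl context matches the profile name, as it does elsewhere in this report):

	    kubectl --context old-k8s-version-054094 describe node old-k8s-version-054094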
	
	
	==> dmesg <==
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff e2 f6 b3 df 65 66 08 06
	[ +15.220231] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff ce d6 cd 1c d5 af 08 06
	[  +0.016823] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff 42 21 9e 70 54 d2 08 06
	[  +0.853950] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 8a f3 da 67 50 34 08 06
	[  +0.000402] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff e2 f6 b3 df 65 66 08 06
	[Nov23 09:06] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 3a fe f0 bb b2 e5 08 06
	[  +0.000433] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000017] ll header: 00000000: ff ff ff ff ff ff 42 21 9e 70 54 d2 08 06
	[ +22.099976] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff fa bc 42 68 ca 38 08 06
	[  +0.042361] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 02 6f 93 2c ed 12 08 06
	[ +12.988668] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff c6 40 c7 0d 08 88 08 06
	[  +0.000458] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff c6 f2 c5 3b d5 0a 08 06
	[  +8.074904] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff ba d8 15 23 cb ea 08 06
	[  +0.000480] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff fa bc 42 68 ca 38 08 06
	
	
	==> etcd [f44522ba9a3e1d5420f3d239cc098912cf09421d83da9a3460987981eb217191] <==
	{"level":"info","ts":"2025-11-23T09:06:29.345242Z","caller":"traceutil/trace.go:171","msg":"trace[746357127] transaction","detail":"{read_only:false; response_revision:14; number_of_response:1; }","duration":"252.516984ms","start":"2025-11-23T09:06:29.092685Z","end":"2025-11-23T09:06:29.345202Z","steps":["trace[746357127] 'process raft request'  (duration: 252.381935ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-23T09:06:29.345455Z","caller":"traceutil/trace.go:171","msg":"trace[857914135] transaction","detail":"{read_only:false; response_revision:15; number_of_response:1; }","duration":"252.675238ms","start":"2025-11-23T09:06:29.092774Z","end":"2025-11-23T09:06:29.345449Z","steps":["trace[857914135] 'process raft request'  (duration: 252.322042ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-23T09:06:29.345493Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"252.16053ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:4"}
	{"level":"info","ts":"2025-11-23T09:06:29.345533Z","caller":"traceutil/trace.go:171","msg":"trace[602952661] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:18; }","duration":"252.206944ms","start":"2025-11-23T09:06:29.093315Z","end":"2025-11-23T09:06:29.345522Z","steps":["trace[602952661] 'agreement among raft nodes before linearized reading'  (duration: 252.074513ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-23T09:06:29.345563Z","caller":"traceutil/trace.go:171","msg":"trace[2056582772] transaction","detail":"{read_only:false; number_of_response:0; response_revision:17; }","duration":"252.101468ms","start":"2025-11-23T09:06:29.093455Z","end":"2025-11-23T09:06:29.345557Z","steps":["trace[2056582772] 'process raft request'  (duration: 251.707911ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-23T09:06:29.345566Z","caller":"traceutil/trace.go:171","msg":"trace[440241747] transaction","detail":"{read_only:false; response_revision:17; number_of_response:1; }","duration":"252.328309ms","start":"2025-11-23T09:06:29.093227Z","end":"2025-11-23T09:06:29.345556Z","steps":["trace[440241747] 'process raft request'  (duration: 251.916014ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-23T09:06:29.345582Z","caller":"traceutil/trace.go:171","msg":"trace[1407388697] transaction","detail":"{read_only:false; response_revision:16; number_of_response:1; }","duration":"252.727353ms","start":"2025-11-23T09:06:29.092846Z","end":"2025-11-23T09:06:29.345573Z","steps":["trace[1407388697] 'process raft request'  (duration: 252.277038ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-23T09:06:29.345573Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"253.977946ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/old-k8s-version-054094\" ","response":"range_response_count:1 size:4217"}
	{"level":"info","ts":"2025-11-23T09:06:29.345647Z","caller":"traceutil/trace.go:171","msg":"trace[1570952673] range","detail":"{range_begin:/registry/minions/old-k8s-version-054094; range_end:; response_count:1; response_revision:18; }","duration":"254.0611ms","start":"2025-11-23T09:06:29.091579Z","end":"2025-11-23T09:06:29.34564Z","steps":["trace[1570952673] 'agreement among raft nodes before linearized reading'  (duration: 253.855769ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-23T09:06:29.357508Z","caller":"traceutil/trace.go:171","msg":"trace[1102563669] transaction","detail":"{read_only:false; response_revision:19; number_of_response:1; }","duration":"189.131277ms","start":"2025-11-23T09:06:29.168355Z","end":"2025-11-23T09:06:29.357486Z","steps":["trace[1102563669] 'process raft request'  (duration: 188.932314ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-23T09:06:29.357547Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"147.640599ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/default\" ","response":"range_response_count:0 size:4"}
	{"level":"info","ts":"2025-11-23T09:06:29.357576Z","caller":"traceutil/trace.go:171","msg":"trace[250997189] range","detail":"{range_begin:/registry/namespaces/default; range_end:; response_count:0; response_revision:19; }","duration":"147.679088ms","start":"2025-11-23T09:06:29.209889Z","end":"2025-11-23T09:06:29.357568Z","steps":["trace[250997189] 'agreement among raft nodes before linearized reading'  (duration: 147.612505ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-23T09:06:29.357522Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"147.598224ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/kube-system\" ","response":"range_response_count:1 size:350"}
	{"level":"warn","ts":"2025-11-23T09:06:29.357583Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"199.355004ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/certificatesigningrequests/csr-lhxbh\" ","response":"range_response_count:1 size:895"}
	{"level":"info","ts":"2025-11-23T09:06:29.357618Z","caller":"traceutil/trace.go:171","msg":"trace[1045815568] range","detail":"{range_begin:/registry/certificatesigningrequests/csr-lhxbh; range_end:; response_count:1; response_revision:19; }","duration":"199.395098ms","start":"2025-11-23T09:06:29.158214Z","end":"2025-11-23T09:06:29.357609Z","steps":["trace[1045815568] 'agreement among raft nodes before linearized reading'  (duration: 199.317448ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-23T09:06:29.357654Z","caller":"traceutil/trace.go:171","msg":"trace[1911210316] range","detail":"{range_begin:/registry/namespaces/kube-system; range_end:; response_count:1; response_revision:19; }","duration":"147.703414ms","start":"2025-11-23T09:06:29.209901Z","end":"2025-11-23T09:06:29.357605Z","steps":["trace[1911210316] 'agreement among raft nodes before linearized reading'  (duration: 147.558317ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-23T09:06:31.565271Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"135.138944ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/disruption-controller\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-23T09:06:31.565346Z","caller":"traceutil/trace.go:171","msg":"trace[1525225470] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/disruption-controller; range_end:; response_count:0; response_revision:248; }","duration":"135.234108ms","start":"2025-11-23T09:06:31.430101Z","end":"2025-11-23T09:06:31.565335Z","steps":["trace[1525225470] 'range keys from in-memory index tree'  (duration: 135.056495ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-23T09:06:31.926031Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"114.452502ms","expected-duration":"100ms","prefix":"","request":"header:<ID:15638356836643334475 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/minions/old-k8s-version-054094\" mod_revision:247 > success:<request_put:<key:\"/registry/minions/old-k8s-version-054094\" value_size:4909 >> failure:<request_range:<key:\"/registry/minions/old-k8s-version-054094\" > >>","response":"size:16"}
	{"level":"info","ts":"2025-11-23T09:06:31.926104Z","caller":"traceutil/trace.go:171","msg":"trace[411365955] linearizableReadLoop","detail":"{readStateIndex:256; appliedIndex:255; }","duration":"246.598081ms","start":"2025-11-23T09:06:31.679493Z","end":"2025-11-23T09:06:31.926091Z","steps":["trace[411365955] 'read index received'  (duration: 131.960326ms)","trace[411365955] 'applied index is now lower than readState.Index'  (duration: 114.636333ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-23T09:06:31.926125Z","caller":"traceutil/trace.go:171","msg":"trace[777889080] transaction","detail":"{read_only:false; response_revision:250; number_of_response:1; }","duration":"289.062108ms","start":"2025-11-23T09:06:31.637041Z","end":"2025-11-23T09:06:31.926103Z","steps":["trace[777889080] 'process raft request'  (duration: 174.377551ms)","trace[777889080] 'compare'  (duration: 114.335455ms)"],"step_count":2}
	{"level":"warn","ts":"2025-11-23T09:06:31.926161Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"246.689457ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/ttl-controller\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-23T09:06:31.926184Z","caller":"traceutil/trace.go:171","msg":"trace[1751668640] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/ttl-controller; range_end:; response_count:0; response_revision:250; }","duration":"246.712396ms","start":"2025-11-23T09:06:31.679464Z","end":"2025-11-23T09:06:31.926176Z","steps":["trace[1751668640] 'agreement among raft nodes before linearized reading'  (duration: 246.663175ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-23T09:07:10.710701Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"159.425685ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/masterleases/192.168.76.2\" ","response":"range_response_count:1 size:131"}
	{"level":"info","ts":"2025-11-23T09:07:10.710821Z","caller":"traceutil/trace.go:171","msg":"trace[333262263] range","detail":"{range_begin:/registry/masterleases/192.168.76.2; range_end:; response_count:1; response_revision:458; }","duration":"159.555166ms","start":"2025-11-23T09:07:10.551243Z","end":"2025-11-23T09:07:10.710798Z","steps":["trace[333262263] 'range keys from in-memory index tree'  (duration: 159.284079ms)"],"step_count":1}
	
	
	==> kernel <==
	 09:07:14 up  1:49,  0 user,  load average: 4.57, 3.87, 2.51
	Linux old-k8s-version-054094 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [9dc258f2b87636fcd89351ee915c7b7f3fed8f084c4d2785b7a186f118e7fd82] <==
	I1123 09:06:47.905493       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1123 09:06:47.905822       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1123 09:06:47.906019       1 main.go:148] setting mtu 1500 for CNI 
	I1123 09:06:47.906039       1 main.go:178] kindnetd IP family: "ipv4"
	I1123 09:06:47.906070       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-23T09:06:48Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1123 09:06:48.111690       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1123 09:06:48.111747       1 controller.go:381] "Waiting for informer caches to sync"
	I1123 09:06:48.111763       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1123 09:06:48.112242       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1123 09:06:48.606060       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1123 09:06:48.606100       1 metrics.go:72] Registering metrics
	I1123 09:06:48.606212       1 controller.go:711] "Syncing nftables rules"
	I1123 09:06:58.120062       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1123 09:06:58.120109       1 main.go:301] handling current node
	I1123 09:07:08.112090       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1123 09:07:08.112142       1 main.go:301] handling current node
	
	
	==> kube-apiserver [8eb5541836dc10beddbd5e48b6797402e28e6a8facf73fce6389e3a0162cfb4b] <==
	I1123 09:06:28.909190       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1123 09:06:28.909214       1 aggregator.go:166] initial CRD sync complete...
	I1123 09:06:28.909225       1 autoregister_controller.go:141] Starting autoregister controller
	I1123 09:06:28.909231       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1123 09:06:28.909237       1 cache.go:39] Caches are synced for autoregister controller
	I1123 09:06:28.912392       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1123 09:06:28.966106       1 shared_informer.go:318] Caches are synced for node_authorizer
	E1123 09:06:29.152678       1 controller.go:146] "Failed to ensure lease exists, will retry" err="namespaces \"kube-system\" not found" interval="200ms"
	E1123 09:06:29.152742       1 controller.go:145] while syncing ConfigMap "kube-system/kube-apiserver-legacy-service-account-token-tracking", err: namespaces "kube-system" not found
	I1123 09:06:29.382980       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1123 09:06:29.810399       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1123 09:06:29.814173       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1123 09:06:29.814256       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1123 09:06:30.397882       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1123 09:06:30.450298       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1123 09:06:30.557522       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1123 09:06:30.565817       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1123 09:06:30.567395       1 controller.go:624] quota admission added evaluator for: endpoints
	I1123 09:06:30.573905       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1123 09:06:30.883746       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1123 09:06:32.184631       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1123 09:06:32.198036       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1123 09:06:32.210501       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I1123 09:06:44.615442       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1123 09:06:44.714426       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [3d3cd0b0cf83cee570eb93d9e6b43fda13f0a436e63a773663d2199fb5edf1ad] <==
	I1123 09:06:44.035383       1 shared_informer.go:318] Caches are synced for stateful set
	I1123 09:06:44.066791       1 shared_informer.go:318] Caches are synced for resource quota
	I1123 09:06:44.070940       1 shared_informer.go:318] Caches are synced for resource quota
	I1123 09:06:44.398619       1 shared_informer.go:318] Caches are synced for garbage collector
	I1123 09:06:44.431863       1 shared_informer.go:318] Caches are synced for garbage collector
	I1123 09:06:44.431903       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1123 09:06:44.619448       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-5dd5756b68 to 2"
	I1123 09:06:44.724308       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-9crnb"
	I1123 09:06:44.726202       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-fhw8w"
	I1123 09:06:44.871059       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-wffgm"
	I1123 09:06:44.878516       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-whp8m"
	I1123 09:06:44.890409       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="271.135273ms"
	I1123 09:06:44.897650       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="7.178359ms"
	I1123 09:06:44.898493       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="103.092µs"
	I1123 09:06:45.424387       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-5dd5756b68 to 1 from 2"
	I1123 09:06:45.435167       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-5dd5756b68-wffgm"
	I1123 09:06:45.442767       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="18.654235ms"
	I1123 09:06:45.449716       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="6.871796ms"
	I1123 09:06:45.449862       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="85.078µs"
	I1123 09:06:58.544602       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="109.485µs"
	I1123 09:06:58.555323       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="134.637µs"
	I1123 09:06:58.817200       1 node_lifecycle_controller.go:1048] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	I1123 09:06:59.382530       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="123.234µs"
	I1123 09:06:59.405166       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="9.766424ms"
	I1123 09:06:59.405307       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="85.838µs"
	
	
	==> kube-proxy [cad5505a61f366028a113f4ec8fc612d57d2a97241589e941d1101521b99c326] <==
	I1123 09:06:45.190874       1 server_others.go:69] "Using iptables proxy"
	I1123 09:06:45.206544       1 node.go:141] Successfully retrieved node IP: 192.168.76.2
	I1123 09:06:45.287677       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1123 09:06:45.291337       1 server_others.go:152] "Using iptables Proxier"
	I1123 09:06:45.291383       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1123 09:06:45.291393       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1123 09:06:45.291640       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1123 09:06:45.291947       1 server.go:846] "Version info" version="v1.28.0"
	I1123 09:06:45.292031       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1123 09:06:45.297418       1 config.go:188] "Starting service config controller"
	I1123 09:06:45.297462       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1123 09:06:45.297493       1 config.go:97] "Starting endpoint slice config controller"
	I1123 09:06:45.297499       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1123 09:06:45.297838       1 config.go:315] "Starting node config controller"
	I1123 09:06:45.297894       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1123 09:06:45.397792       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1123 09:06:45.397866       1 shared_informer.go:318] Caches are synced for service config
	I1123 09:06:45.398021       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [6201e53c2d04ef0a00f6ad3131f2a1361ef494ade171923b428c00372b24172b] <==
	E1123 09:06:28.884673       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1123 09:06:28.884752       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1123 09:06:28.884768       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1123 09:06:28.884812       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1123 09:06:28.884831       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W1123 09:06:28.884898       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1123 09:06:28.884908       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1123 09:06:28.884923       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1123 09:06:28.884939       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1123 09:06:29.727470       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1123 09:06:29.727502       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W1123 09:06:29.973892       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1123 09:06:29.973932       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1123 09:06:30.055892       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1123 09:06:30.056187       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W1123 09:06:30.057234       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1123 09:06:30.057452       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1123 09:06:30.153819       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W1123 09:06:30.153819       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1123 09:06:30.154405       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1123 09:06:30.154236       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W1123 09:06:30.204290       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1123 09:06:30.204329       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I1123 09:06:31.781272       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Nov 23 09:06:43 old-k8s-version-054094 kubelet[1393]: I1123 09:06:43.952729    1393 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 23 09:06:44 old-k8s-version-054094 kubelet[1393]: I1123 09:06:44.729991    1393 topology_manager.go:215] "Topology Admit Handler" podUID="f960df2c-e6e2-469b-84d0-1313f050c423" podNamespace="kube-system" podName="kube-proxy-9crnb"
	Nov 23 09:06:44 old-k8s-version-054094 kubelet[1393]: I1123 09:06:44.731337    1393 topology_manager.go:215] "Topology Admit Handler" podUID="bc24d3dc-419d-4f46-95fd-ee08945db06b" podNamespace="kube-system" podName="kindnet-fhw8w"
	Nov 23 09:06:44 old-k8s-version-054094 kubelet[1393]: I1123 09:06:44.843313    1393 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/f960df2c-e6e2-469b-84d0-1313f050c423-kube-proxy\") pod \"kube-proxy-9crnb\" (UID: \"f960df2c-e6e2-469b-84d0-1313f050c423\") " pod="kube-system/kube-proxy-9crnb"
	Nov 23 09:06:44 old-k8s-version-054094 kubelet[1393]: I1123 09:06:44.843379    1393 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/bc24d3dc-419d-4f46-95fd-ee08945db06b-cni-cfg\") pod \"kindnet-fhw8w\" (UID: \"bc24d3dc-419d-4f46-95fd-ee08945db06b\") " pod="kube-system/kindnet-fhw8w"
	Nov 23 09:06:44 old-k8s-version-054094 kubelet[1393]: I1123 09:06:44.843415    1393 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bc24d3dc-419d-4f46-95fd-ee08945db06b-lib-modules\") pod \"kindnet-fhw8w\" (UID: \"bc24d3dc-419d-4f46-95fd-ee08945db06b\") " pod="kube-system/kindnet-fhw8w"
	Nov 23 09:06:44 old-k8s-version-054094 kubelet[1393]: I1123 09:06:44.843447    1393 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f960df2c-e6e2-469b-84d0-1313f050c423-lib-modules\") pod \"kube-proxy-9crnb\" (UID: \"f960df2c-e6e2-469b-84d0-1313f050c423\") " pod="kube-system/kube-proxy-9crnb"
	Nov 23 09:06:44 old-k8s-version-054094 kubelet[1393]: I1123 09:06:44.843480    1393 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bc24d3dc-419d-4f46-95fd-ee08945db06b-xtables-lock\") pod \"kindnet-fhw8w\" (UID: \"bc24d3dc-419d-4f46-95fd-ee08945db06b\") " pod="kube-system/kindnet-fhw8w"
	Nov 23 09:06:44 old-k8s-version-054094 kubelet[1393]: I1123 09:06:44.843524    1393 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kr7h7\" (UniqueName: \"kubernetes.io/projected/bc24d3dc-419d-4f46-95fd-ee08945db06b-kube-api-access-kr7h7\") pod \"kindnet-fhw8w\" (UID: \"bc24d3dc-419d-4f46-95fd-ee08945db06b\") " pod="kube-system/kindnet-fhw8w"
	Nov 23 09:06:44 old-k8s-version-054094 kubelet[1393]: I1123 09:06:44.843557    1393 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zj5ps\" (UniqueName: \"kubernetes.io/projected/f960df2c-e6e2-469b-84d0-1313f050c423-kube-api-access-zj5ps\") pod \"kube-proxy-9crnb\" (UID: \"f960df2c-e6e2-469b-84d0-1313f050c423\") " pod="kube-system/kube-proxy-9crnb"
	Nov 23 09:06:44 old-k8s-version-054094 kubelet[1393]: I1123 09:06:44.843608    1393 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f960df2c-e6e2-469b-84d0-1313f050c423-xtables-lock\") pod \"kube-proxy-9crnb\" (UID: \"f960df2c-e6e2-469b-84d0-1313f050c423\") " pod="kube-system/kube-proxy-9crnb"
	Nov 23 09:06:48 old-k8s-version-054094 kubelet[1393]: I1123 09:06:48.360769    1393 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-9crnb" podStartSLOduration=4.360710903 podCreationTimestamp="2025-11-23 09:06:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 09:06:45.347137867 +0000 UTC m=+13.186461378" watchObservedRunningTime="2025-11-23 09:06:48.360710903 +0000 UTC m=+16.200034410"
	Nov 23 09:06:48 old-k8s-version-054094 kubelet[1393]: I1123 09:06:48.360935    1393 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kindnet-fhw8w" podStartSLOduration=1.8218713100000001 podCreationTimestamp="2025-11-23 09:06:44 +0000 UTC" firstStartedPulling="2025-11-23 09:06:45.0461484 +0000 UTC m=+12.885471891" lastFinishedPulling="2025-11-23 09:06:47.585184229 +0000 UTC m=+15.424507721" observedRunningTime="2025-11-23 09:06:48.360477662 +0000 UTC m=+16.199801159" watchObservedRunningTime="2025-11-23 09:06:48.36090714 +0000 UTC m=+16.200230638"
	Nov 23 09:06:58 old-k8s-version-054094 kubelet[1393]: I1123 09:06:58.514099    1393 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	Nov 23 09:06:58 old-k8s-version-054094 kubelet[1393]: I1123 09:06:58.544830    1393 topology_manager.go:215] "Topology Admit Handler" podUID="98d558c0-e6d2-496d-929c-2952167f67b1" podNamespace="kube-system" podName="coredns-5dd5756b68-whp8m"
	Nov 23 09:06:58 old-k8s-version-054094 kubelet[1393]: I1123 09:06:58.545716    1393 topology_manager.go:215] "Topology Admit Handler" podUID="a6567013-443f-4d54-9e70-6c21199c73bc" podNamespace="kube-system" podName="storage-provisioner"
	Nov 23 09:06:58 old-k8s-version-054094 kubelet[1393]: I1123 09:06:58.644768    1393 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m9lds\" (UniqueName: \"kubernetes.io/projected/98d558c0-e6d2-496d-929c-2952167f67b1-kube-api-access-m9lds\") pod \"coredns-5dd5756b68-whp8m\" (UID: \"98d558c0-e6d2-496d-929c-2952167f67b1\") " pod="kube-system/coredns-5dd5756b68-whp8m"
	Nov 23 09:06:58 old-k8s-version-054094 kubelet[1393]: I1123 09:06:58.644812    1393 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gxwmx\" (UniqueName: \"kubernetes.io/projected/a6567013-443f-4d54-9e70-6c21199c73bc-kube-api-access-gxwmx\") pod \"storage-provisioner\" (UID: \"a6567013-443f-4d54-9e70-6c21199c73bc\") " pod="kube-system/storage-provisioner"
	Nov 23 09:06:58 old-k8s-version-054094 kubelet[1393]: I1123 09:06:58.644840    1393 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/98d558c0-e6d2-496d-929c-2952167f67b1-config-volume\") pod \"coredns-5dd5756b68-whp8m\" (UID: \"98d558c0-e6d2-496d-929c-2952167f67b1\") " pod="kube-system/coredns-5dd5756b68-whp8m"
	Nov 23 09:06:58 old-k8s-version-054094 kubelet[1393]: I1123 09:06:58.644861    1393 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/a6567013-443f-4d54-9e70-6c21199c73bc-tmp\") pod \"storage-provisioner\" (UID: \"a6567013-443f-4d54-9e70-6c21199c73bc\") " pod="kube-system/storage-provisioner"
	Nov 23 09:06:59 old-k8s-version-054094 kubelet[1393]: I1123 09:06:59.382418    1393 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-whp8m" podStartSLOduration=15.382368052 podCreationTimestamp="2025-11-23 09:06:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 09:06:59.382273283 +0000 UTC m=+27.221596782" watchObservedRunningTime="2025-11-23 09:06:59.382368052 +0000 UTC m=+27.221691548"
	Nov 23 09:06:59 old-k8s-version-054094 kubelet[1393]: I1123 09:06:59.413703    1393 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=14.413617022 podCreationTimestamp="2025-11-23 09:06:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 09:06:59.412686 +0000 UTC m=+27.252009497" watchObservedRunningTime="2025-11-23 09:06:59.413617022 +0000 UTC m=+27.252940521"
	Nov 23 09:07:01 old-k8s-version-054094 kubelet[1393]: I1123 09:07:01.425779    1393 topology_manager.go:215] "Topology Admit Handler" podUID="45bf2904-a260-4a9c-9bb1-efedb8776977" podNamespace="default" podName="busybox"
	Nov 23 09:07:01 old-k8s-version-054094 kubelet[1393]: I1123 09:07:01.464025    1393 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tcj4f\" (UniqueName: \"kubernetes.io/projected/45bf2904-a260-4a9c-9bb1-efedb8776977-kube-api-access-tcj4f\") pod \"busybox\" (UID: \"45bf2904-a260-4a9c-9bb1-efedb8776977\") " pod="default/busybox"
	Nov 23 09:07:06 old-k8s-version-054094 kubelet[1393]: I1123 09:07:06.432146    1393 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=1.297769207 podCreationTimestamp="2025-11-23 09:07:01 +0000 UTC" firstStartedPulling="2025-11-23 09:07:01.750739859 +0000 UTC m=+29.590063348" lastFinishedPulling="2025-11-23 09:07:05.885052796 +0000 UTC m=+33.724376286" observedRunningTime="2025-11-23 09:07:06.431943671 +0000 UTC m=+34.271267169" watchObservedRunningTime="2025-11-23 09:07:06.432082145 +0000 UTC m=+34.271405642"
	
	
	==> storage-provisioner [5f8be23fb84371a9026a99d3767ae21f3f3b7ac5187edc57be0d47bc4d0c548d] <==
	I1123 09:06:58.920446       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1123 09:06:58.933482       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1123 09:06:58.933639       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1123 09:06:58.940847       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1123 09:06:58.941166       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-054094_412cf41a-2ff2-4ca8-9711-b2b68929d4e9!
	I1123 09:06:58.941224       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"c3dec40d-0ff5-42c0-b2b8-e87a7b713465", APIVersion:"v1", ResourceVersion:"429", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-054094_412cf41a-2ff2-4ca8-9711-b2b68929d4e9 became leader
	I1123 09:06:59.042352       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-054094_412cf41a-2ff2-4ca8-9711-b2b68929d4e9!
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-054094 -n old-k8s-version-054094
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-054094 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (2.31s)
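
To reproduce just this failure outside CI, the subtest can be run on its own with standard go test selectors. A minimal sketch, assuming a minikube checkout with out/minikube-linux-amd64 already built; the driver and runtime flags the CI job passes are omitted here:

    # -run takes the full slash-separated subtest path
    go test ./test/integration -v -timeout 60m \
      -run 'TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive'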

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (2.32s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-619589 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p no-preload-619589 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (248.725568ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T09:07:28Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p no-preload-619589 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
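
The MK_ADDON_ENABLE_PAUSED error above shows the pause check shelling out to "sudo runc list -f json" and failing on "open /run/runc: no such file or directory". A minimal sketch for confirming this by hand on the node; the crictl fallback is our assumption for a crio runtime, not something the harness runs:

    # Re-run the exact command the pause check uses; expected to fail the same way
    minikube -p no-preload-619589 ssh -- sudo runc list -f json
    # crio tracks container state itself, so crictl can still list running containers
    minikube -p no-preload-619589 ssh -- sudo crictl ps --state running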
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-619589 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context no-preload-619589 describe deploy/metrics-server -n kube-system: exit status 1 (57.550877ms)

** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context no-preload-619589 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
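
The assertion at start_stop_delete_test.go:219 checks that the metrics-server deployment's image was rewritten onto the fake.domain registry. Since the describe call above returned NotFound, a narrower probe (illustrative only; same context name as in the log) would be:

    # Print just the container image of the metrics-server deployment, if it exists
    kubectl --context no-preload-619589 -n kube-system get deploy metrics-server \
      -o jsonpath='{.spec.template.spec.containers[0].image}'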
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-619589
helpers_test.go:243: (dbg) docker inspect no-preload-619589:

-- stdout --
	[
	    {
	        "Id": "75a1703935537a599d4d63d8a1fcad8aee76b131885db1cc71f51b6fe5b40328",
	        "Created": "2025-11-23T09:06:25.102316496Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 385516,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-23T09:06:25.140465346Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:133ca4ac39008d0056ad45d8cb70521d6b70d6e1b8bbff4678fd4b354efbdf70",
	        "ResolvConfPath": "/var/lib/docker/containers/75a1703935537a599d4d63d8a1fcad8aee76b131885db1cc71f51b6fe5b40328/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/75a1703935537a599d4d63d8a1fcad8aee76b131885db1cc71f51b6fe5b40328/hostname",
	        "HostsPath": "/var/lib/docker/containers/75a1703935537a599d4d63d8a1fcad8aee76b131885db1cc71f51b6fe5b40328/hosts",
	        "LogPath": "/var/lib/docker/containers/75a1703935537a599d4d63d8a1fcad8aee76b131885db1cc71f51b6fe5b40328/75a1703935537a599d4d63d8a1fcad8aee76b131885db1cc71f51b6fe5b40328-json.log",
	        "Name": "/no-preload-619589",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-619589:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "no-preload-619589",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "75a1703935537a599d4d63d8a1fcad8aee76b131885db1cc71f51b6fe5b40328",
	                "LowerDir": "/var/lib/docker/overlay2/5661dec26e35ce89a08317de680c51d7eb44a4cd287120651431aafb742f75ce-init/diff:/var/lib/docker/overlay2/5d7426d7b7590330a796f9ce8929a3dfaf6f95af5e86f4e4ea7f2a7f53308616/diff",
	                "MergedDir": "/var/lib/docker/overlay2/5661dec26e35ce89a08317de680c51d7eb44a4cd287120651431aafb742f75ce/merged",
	                "UpperDir": "/var/lib/docker/overlay2/5661dec26e35ce89a08317de680c51d7eb44a4cd287120651431aafb742f75ce/diff",
	                "WorkDir": "/var/lib/docker/overlay2/5661dec26e35ce89a08317de680c51d7eb44a4cd287120651431aafb742f75ce/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-619589",
	                "Source": "/var/lib/docker/volumes/no-preload-619589/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-619589",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-619589",
	                "name.minikube.sigs.k8s.io": "no-preload-619589",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "094a39fbd9c4246d83572ba639ebf1367c05fad4172ee1f052e0f4525e74336c",
	            "SandboxKey": "/var/run/docker/netns/094a39fbd9c4",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33093"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33094"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33097"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33095"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33096"
	                    }
	                ]
	            },
	            "Networks": {
	                "no-preload-619589": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "40c67f27f7925004fe92866c39e8b5aa93f9532071ca8f095a0bf7fb3ffde5bf",
	                    "EndpointID": "d521cc9152339ac05821f5bc25fb7264653e676746f118fcc1fc493c7ccdb8cb",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "MacAddress": "76:f1:2f:30:23:c0",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-619589",
	                        "75a170393553"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
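The "Ports" map in the inspect output above is how the harness reaches the node: each guest port (22 for SSH, 2376, 5000, 8443, 32443) is published on an ephemeral 127.0.0.1 host port. As a minimal sketch, using the same Go template that provisionDockerMachine logs further down and assuming the no-preload-619589 container from this run is still present:

	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' no-preload-619589
	# prints 33093, the HostPort recorded for 22/tcp in the state captured above

minikube resolves its SSH endpoint the same way before dialing 127.0.0.1:<HostPort> with the generated id_rsa key (see the sshutil.go lines below).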
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-619589 -n no-preload-619589
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-619589 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p no-preload-619589 logs -n 25: (1.111616186s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                   ARGS                                                                                   │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p bridge-741183 sudo systemctl status docker --all --full --no-pager                                                                                                    │ bridge-741183                │ jenkins │ v1.37.0 │ 23 Nov 25 09:06 UTC │                     │
	│ ssh     │ -p bridge-741183 sudo systemctl cat docker --no-pager                                                                                                                    │ bridge-741183                │ jenkins │ v1.37.0 │ 23 Nov 25 09:06 UTC │ 23 Nov 25 09:06 UTC │
	│ ssh     │ -p bridge-741183 sudo cat /etc/docker/daemon.json                                                                                                                        │ bridge-741183                │ jenkins │ v1.37.0 │ 23 Nov 25 09:06 UTC │                     │
	│ ssh     │ -p bridge-741183 sudo docker system info                                                                                                                                 │ bridge-741183                │ jenkins │ v1.37.0 │ 23 Nov 25 09:06 UTC │                     │
	│ ssh     │ -p bridge-741183 sudo systemctl status cri-docker --all --full --no-pager                                                                                                │ bridge-741183                │ jenkins │ v1.37.0 │ 23 Nov 25 09:06 UTC │                     │
	│ ssh     │ -p bridge-741183 sudo systemctl cat cri-docker --no-pager                                                                                                                │ bridge-741183                │ jenkins │ v1.37.0 │ 23 Nov 25 09:07 UTC │ 23 Nov 25 09:07 UTC │
	│ start   │ -p embed-certs-529341 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                   │ embed-certs-529341           │ jenkins │ v1.37.0 │ 23 Nov 25 09:07 UTC │                     │
	│ ssh     │ -p bridge-741183 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                                           │ bridge-741183                │ jenkins │ v1.37.0 │ 23 Nov 25 09:07 UTC │                     │
	│ ssh     │ -p bridge-741183 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                                     │ bridge-741183                │ jenkins │ v1.37.0 │ 23 Nov 25 09:07 UTC │ 23 Nov 25 09:07 UTC │
	│ ssh     │ -p bridge-741183 sudo cri-dockerd --version                                                                                                                              │ bridge-741183                │ jenkins │ v1.37.0 │ 23 Nov 25 09:07 UTC │ 23 Nov 25 09:07 UTC │
	│ ssh     │ -p bridge-741183 sudo systemctl status containerd --all --full --no-pager                                                                                                │ bridge-741183                │ jenkins │ v1.37.0 │ 23 Nov 25 09:07 UTC │                     │
	│ ssh     │ -p bridge-741183 sudo systemctl cat containerd --no-pager                                                                                                                │ bridge-741183                │ jenkins │ v1.37.0 │ 23 Nov 25 09:07 UTC │ 23 Nov 25 09:07 UTC │
	│ ssh     │ -p bridge-741183 sudo cat /lib/systemd/system/containerd.service                                                                                                         │ bridge-741183                │ jenkins │ v1.37.0 │ 23 Nov 25 09:07 UTC │ 23 Nov 25 09:07 UTC │
	│ ssh     │ -p bridge-741183 sudo cat /etc/containerd/config.toml                                                                                                                    │ bridge-741183                │ jenkins │ v1.37.0 │ 23 Nov 25 09:07 UTC │ 23 Nov 25 09:07 UTC │
	│ ssh     │ -p bridge-741183 sudo containerd config dump                                                                                                                             │ bridge-741183                │ jenkins │ v1.37.0 │ 23 Nov 25 09:07 UTC │ 23 Nov 25 09:07 UTC │
	│ ssh     │ -p bridge-741183 sudo systemctl status crio --all --full --no-pager                                                                                                      │ bridge-741183                │ jenkins │ v1.37.0 │ 23 Nov 25 09:07 UTC │ 23 Nov 25 09:07 UTC │
	│ ssh     │ -p bridge-741183 sudo systemctl cat crio --no-pager                                                                                                                      │ bridge-741183                │ jenkins │ v1.37.0 │ 23 Nov 25 09:07 UTC │ 23 Nov 25 09:07 UTC │
	│ ssh     │ -p bridge-741183 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                            │ bridge-741183                │ jenkins │ v1.37.0 │ 23 Nov 25 09:07 UTC │ 23 Nov 25 09:07 UTC │
	│ ssh     │ -p bridge-741183 sudo crio config                                                                                                                                        │ bridge-741183                │ jenkins │ v1.37.0 │ 23 Nov 25 09:07 UTC │ 23 Nov 25 09:07 UTC │
	│ delete  │ -p bridge-741183                                                                                                                                                         │ bridge-741183                │ jenkins │ v1.37.0 │ 23 Nov 25 09:07 UTC │ 23 Nov 25 09:07 UTC │
	│ delete  │ -p disable-driver-mounts-740936                                                                                                                                          │ disable-driver-mounts-740936 │ jenkins │ v1.37.0 │ 23 Nov 25 09:07 UTC │ 23 Nov 25 09:07 UTC │
	│ start   │ -p default-k8s-diff-port-602386 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ default-k8s-diff-port-602386 │ jenkins │ v1.37.0 │ 23 Nov 25 09:07 UTC │                     │
	│ addons  │ enable metrics-server -p old-k8s-version-054094 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                             │ old-k8s-version-054094       │ jenkins │ v1.37.0 │ 23 Nov 25 09:07 UTC │                     │
	│ stop    │ -p old-k8s-version-054094 --alsologtostderr -v=3                                                                                                                         │ old-k8s-version-054094       │ jenkins │ v1.37.0 │ 23 Nov 25 09:07 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-619589 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                  │ no-preload-619589            │ jenkins │ v1.37.0 │ 23 Nov 25 09:07 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/23 09:07:08
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1123 09:07:08.320858  401015 out.go:360] Setting OutFile to fd 1 ...
	I1123 09:07:08.320987  401015 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 09:07:08.320998  401015 out.go:374] Setting ErrFile to fd 2...
	I1123 09:07:08.321005  401015 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 09:07:08.321255  401015 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21969-103686/.minikube/bin
	I1123 09:07:08.321772  401015 out.go:368] Setting JSON to false
	I1123 09:07:08.323156  401015 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":6568,"bootTime":1763882260,"procs":322,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1123 09:07:08.323224  401015 start.go:143] virtualization: kvm guest
	I1123 09:07:08.325128  401015 out.go:179] * [default-k8s-diff-port-602386] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1123 09:07:08.327865  401015 notify.go:221] Checking for updates...
	I1123 09:07:08.327890  401015 out.go:179]   - MINIKUBE_LOCATION=21969
	I1123 09:07:08.329123  401015 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1123 09:07:08.330266  401015 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21969-103686/kubeconfig
	I1123 09:07:08.331594  401015 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21969-103686/.minikube
	I1123 09:07:08.332728  401015 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1123 09:07:08.333949  401015 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1123 09:07:08.335800  401015 config.go:182] Loaded profile config "embed-certs-529341": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 09:07:08.335979  401015 config.go:182] Loaded profile config "no-preload-619589": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 09:07:08.336118  401015 config.go:182] Loaded profile config "old-k8s-version-054094": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1123 09:07:08.336251  401015 driver.go:422] Setting default libvirt URI to qemu:///system
	I1123 09:07:08.361501  401015 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1123 09:07:08.361678  401015 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 09:07:08.434623  401015 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:67 OomKillDisable:false NGoroutines:78 SystemTime:2025-11-23 09:07:08.421119525 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1123 09:07:08.434832  401015 docker.go:319] overlay module found
	I1123 09:07:08.436923  401015 out.go:179] * Using the docker driver based on user configuration
	I1123 09:07:08.438018  401015 start.go:309] selected driver: docker
	I1123 09:07:08.438033  401015 start.go:927] validating driver "docker" against <nil>
	I1123 09:07:08.438053  401015 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1123 09:07:08.438550  401015 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 09:07:08.506899  401015 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:67 OomKillDisable:false NGoroutines:78 SystemTime:2025-11-23 09:07:08.495378652 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1123 09:07:08.507109  401015 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1123 09:07:08.507315  401015 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1123 09:07:08.509202  401015 out.go:179] * Using Docker driver with root privileges
	I1123 09:07:08.510308  401015 cni.go:84] Creating CNI manager for ""
	I1123 09:07:08.510389  401015 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1123 09:07:08.510404  401015 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1123 09:07:08.510472  401015 start.go:353] cluster config:
	{Name:default-k8s-diff-port-602386 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-602386 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 09:07:08.511714  401015 out.go:179] * Starting "default-k8s-diff-port-602386" primary control-plane node in "default-k8s-diff-port-602386" cluster
	I1123 09:07:08.512736  401015 cache.go:134] Beginning downloading kic base image for docker with crio
	I1123 09:07:08.513843  401015 out.go:179] * Pulling base image v0.0.48-1763789673-21948 ...
	I1123 09:07:08.514862  401015 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1123 09:07:08.514894  401015 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21969-103686/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1123 09:07:08.514903  401015 cache.go:65] Caching tarball of preloaded images
	I1123 09:07:08.514950  401015 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1123 09:07:08.515008  401015 preload.go:238] Found /home/jenkins/minikube-integration/21969-103686/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1123 09:07:08.515024  401015 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1123 09:07:08.515120  401015 profile.go:143] Saving config to /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/default-k8s-diff-port-602386/config.json ...
	I1123 09:07:08.515148  401015 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/default-k8s-diff-port-602386/config.json: {Name:mk8f3b6ec1fd2a4559a32a2a474b74464c0f0ecb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 09:07:08.537540  401015 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon, skipping pull
	I1123 09:07:08.537562  401015 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in daemon, skipping load
	I1123 09:07:08.537583  401015 cache.go:243] Successfully downloaded all kic artifacts
	I1123 09:07:08.537631  401015 start.go:360] acquireMachinesLock for default-k8s-diff-port-602386: {Name:mk936d882fdf1c8707634b4555fdb3d8130ce5fd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1123 09:07:08.537742  401015 start.go:364] duration metric: took 92.278µs to acquireMachinesLock for "default-k8s-diff-port-602386"
	I1123 09:07:08.537772  401015 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-602386 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-602386 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1123 09:07:08.537865  401015 start.go:125] createHost starting for "" (driver="docker")
	W1123 09:07:04.372774  384612 node_ready.go:57] node "no-preload-619589" has "Ready":"False" status (will retry)
	W1123 09:07:06.452666  384612 node_ready.go:57] node "no-preload-619589" has "Ready":"False" status (will retry)
	W1123 09:07:08.872555  384612 node_ready.go:57] node "no-preload-619589" has "Ready":"False" status (will retry)
	I1123 09:07:06.045023  397302 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21969-103686/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v embed-certs-529341:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -I lz4 -xf /preloaded.tar -C /extractDir: (5.017097372s)
	I1123 09:07:06.045071  397302 kic.go:203] duration metric: took 5.017240686s to extract preloaded images to volume ...
	W1123 09:07:06.045164  397302 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1123 09:07:06.045212  397302 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1123 09:07:06.045265  397302 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1123 09:07:06.129340  397302 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname embed-certs-529341 --name embed-certs-529341 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-529341 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=embed-certs-529341 --network embed-certs-529341 --ip 192.168.103.2 --volume embed-certs-529341:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f
	I1123 09:07:06.753057  397302 cli_runner.go:164] Run: docker container inspect embed-certs-529341 --format={{.State.Running}}
	I1123 09:07:06.772960  397302 cli_runner.go:164] Run: docker container inspect embed-certs-529341 --format={{.State.Status}}
	I1123 09:07:06.792229  397302 cli_runner.go:164] Run: docker exec embed-certs-529341 stat /var/lib/dpkg/alternatives/iptables
	I1123 09:07:06.841376  397302 oci.go:144] the created container "embed-certs-529341" has a running status.
	I1123 09:07:06.841415  397302 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21969-103686/.minikube/machines/embed-certs-529341/id_rsa...
	I1123 09:07:06.950550  397302 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21969-103686/.minikube/machines/embed-certs-529341/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1123 09:07:06.985457  397302 cli_runner.go:164] Run: docker container inspect embed-certs-529341 --format={{.State.Status}}
	I1123 09:07:07.004386  397302 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1123 09:07:07.004413  397302 kic_runner.go:114] Args: [docker exec --privileged embed-certs-529341 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1123 09:07:07.067651  397302 cli_runner.go:164] Run: docker container inspect embed-certs-529341 --format={{.State.Status}}
	I1123 09:07:07.091716  397302 machine.go:94] provisionDockerMachine start ...
	I1123 09:07:07.091860  397302 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-529341
	I1123 09:07:07.116021  397302 main.go:143] libmachine: Using SSH client type: native
	I1123 09:07:07.116602  397302 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33098 <nil> <nil>}
	I1123 09:07:07.116650  397302 main.go:143] libmachine: About to run SSH command:
	hostname
	I1123 09:07:07.270199  397302 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-529341
	
	I1123 09:07:07.270227  397302 ubuntu.go:182] provisioning hostname "embed-certs-529341"
	I1123 09:07:07.270296  397302 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-529341
	I1123 09:07:07.291192  397302 main.go:143] libmachine: Using SSH client type: native
	I1123 09:07:07.291472  397302 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33098 <nil> <nil>}
	I1123 09:07:07.291495  397302 main.go:143] libmachine: About to run SSH command:
	sudo hostname embed-certs-529341 && echo "embed-certs-529341" | sudo tee /etc/hostname
	I1123 09:07:07.466762  397302 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-529341
	
	I1123 09:07:07.466851  397302 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-529341
	I1123 09:07:07.489355  397302 main.go:143] libmachine: Using SSH client type: native
	I1123 09:07:07.489763  397302 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33098 <nil> <nil>}
	I1123 09:07:07.489790  397302 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-529341' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-529341/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-529341' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1123 09:07:07.666676  397302 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1123 09:07:07.666712  397302 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21969-103686/.minikube CaCertPath:/home/jenkins/minikube-integration/21969-103686/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21969-103686/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21969-103686/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21969-103686/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21969-103686/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21969-103686/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21969-103686/.minikube}
	I1123 09:07:07.666734  397302 ubuntu.go:190] setting up certificates
	I1123 09:07:07.666753  397302 provision.go:84] configureAuth start
	I1123 09:07:07.666807  397302 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-529341
	I1123 09:07:07.685513  397302 provision.go:143] copyHostCerts
	I1123 09:07:07.685579  397302 exec_runner.go:144] found /home/jenkins/minikube-integration/21969-103686/.minikube/cert.pem, removing ...
	I1123 09:07:07.685599  397302 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21969-103686/.minikube/cert.pem
	I1123 09:07:07.685661  397302 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21969-103686/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21969-103686/.minikube/cert.pem (1123 bytes)
	I1123 09:07:07.685768  397302 exec_runner.go:144] found /home/jenkins/minikube-integration/21969-103686/.minikube/key.pem, removing ...
	I1123 09:07:07.685779  397302 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21969-103686/.minikube/key.pem
	I1123 09:07:07.685813  397302 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21969-103686/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21969-103686/.minikube/key.pem (1675 bytes)
	I1123 09:07:07.685885  397302 exec_runner.go:144] found /home/jenkins/minikube-integration/21969-103686/.minikube/ca.pem, removing ...
	I1123 09:07:07.685896  397302 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21969-103686/.minikube/ca.pem
	I1123 09:07:07.685925  397302 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21969-103686/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21969-103686/.minikube/ca.pem (1078 bytes)
	I1123 09:07:07.686016  397302 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21969-103686/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21969-103686/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21969-103686/.minikube/certs/ca-key.pem org=jenkins.embed-certs-529341 san=[127.0.0.1 192.168.103.2 embed-certs-529341 localhost minikube]
	I1123 09:07:07.726091  397302 provision.go:177] copyRemoteCerts
	I1123 09:07:07.726152  397302 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1123 09:07:07.726193  397302 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-529341
	I1123 09:07:07.746067  397302 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/21969-103686/.minikube/machines/embed-certs-529341/id_rsa Username:docker}
	I1123 09:07:07.848621  397302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-103686/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1123 09:07:07.877290  397302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-103686/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1123 09:07:07.897030  397302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-103686/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1123 09:07:07.938111  397302 provision.go:87] duration metric: took 271.341725ms to configureAuth
	I1123 09:07:07.938137  397302 ubuntu.go:206] setting minikube options for container-runtime
	I1123 09:07:07.938281  397302 config.go:182] Loaded profile config "embed-certs-529341": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 09:07:07.938378  397302 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-529341
	I1123 09:07:07.956668  397302 main.go:143] libmachine: Using SSH client type: native
	I1123 09:07:07.956913  397302 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33098 <nil> <nil>}
	I1123 09:07:07.956932  397302 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1123 09:07:08.253272  397302 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1123 09:07:08.253301  397302 machine.go:97] duration metric: took 1.161555751s to provisionDockerMachine
	I1123 09:07:08.253313  397302 client.go:176] duration metric: took 7.87499223s to LocalClient.Create
	I1123 09:07:08.253326  397302 start.go:167] duration metric: took 7.875051823s to libmachine.API.Create "embed-certs-529341"
	I1123 09:07:08.253333  397302 start.go:293] postStartSetup for "embed-certs-529341" (driver="docker")
	I1123 09:07:08.253342  397302 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1123 09:07:08.253399  397302 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1123 09:07:08.253442  397302 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-529341
	I1123 09:07:08.273300  397302 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/21969-103686/.minikube/machines/embed-certs-529341/id_rsa Username:docker}
	I1123 09:07:08.380260  397302 ssh_runner.go:195] Run: cat /etc/os-release
	I1123 09:07:08.385526  397302 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1123 09:07:08.385574  397302 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1123 09:07:08.385588  397302 filesync.go:126] Scanning /home/jenkins/minikube-integration/21969-103686/.minikube/addons for local assets ...
	I1123 09:07:08.385661  397302 filesync.go:126] Scanning /home/jenkins/minikube-integration/21969-103686/.minikube/files for local assets ...
	I1123 09:07:08.385758  397302 filesync.go:149] local asset: /home/jenkins/minikube-integration/21969-103686/.minikube/files/etc/ssl/certs/1072342.pem -> 1072342.pem in /etc/ssl/certs
	I1123 09:07:08.385892  397302 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1123 09:07:08.397303  397302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-103686/.minikube/files/etc/ssl/certs/1072342.pem --> /etc/ssl/certs/1072342.pem (1708 bytes)
	I1123 09:07:08.426991  397302 start.go:296] duration metric: took 173.642066ms for postStartSetup
	I1123 09:07:08.427415  397302 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-529341
	I1123 09:07:08.450325  397302 profile.go:143] Saving config to /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/embed-certs-529341/config.json ...
	I1123 09:07:08.450700  397302 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1123 09:07:08.450761  397302 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-529341
	I1123 09:07:08.472787  397302 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/21969-103686/.minikube/machines/embed-certs-529341/id_rsa Username:docker}
	I1123 09:07:08.581526  397302 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1123 09:07:08.586140  397302 start.go:128] duration metric: took 8.21011541s to createHost
	I1123 09:07:08.586165  397302 start.go:83] releasing machines lock for "embed-certs-529341", held for 8.210294405s
	I1123 09:07:08.586241  397302 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-529341
	I1123 09:07:08.606886  397302 ssh_runner.go:195] Run: cat /version.json
	I1123 09:07:08.606911  397302 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1123 09:07:08.606944  397302 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-529341
	I1123 09:07:08.606991  397302 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-529341
	I1123 09:07:08.628561  397302 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/21969-103686/.minikube/machines/embed-certs-529341/id_rsa Username:docker}
	I1123 09:07:08.629587  397302 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/21969-103686/.minikube/machines/embed-certs-529341/id_rsa Username:docker}
	I1123 09:07:08.801920  397302 ssh_runner.go:195] Run: systemctl --version
	I1123 09:07:08.809058  397302 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1123 09:07:08.852834  397302 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1123 09:07:08.858091  397302 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1123 09:07:08.858164  397302 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1123 09:07:08.891180  397302 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1123 09:07:08.891206  397302 start.go:496] detecting cgroup driver to use...
	I1123 09:07:08.891238  397302 detect.go:190] detected "systemd" cgroup driver on host os
	I1123 09:07:08.891290  397302 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1123 09:07:08.911627  397302 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1123 09:07:08.927032  397302 docker.go:218] disabling cri-docker service (if available) ...
	I1123 09:07:08.927094  397302 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1123 09:07:08.948755  397302 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1123 09:07:08.969165  397302 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1123 09:07:09.074577  397302 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1123 09:07:09.176497  397302 docker.go:234] disabling docker service ...
	I1123 09:07:09.176560  397302 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1123 09:07:09.196209  397302 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1123 09:07:09.209433  397302 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1123 09:07:09.303280  397302 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1123 09:07:09.408362  397302 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1123 09:07:09.421625  397302 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1123 09:07:09.439556  397302 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1123 09:07:09.439623  397302 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 09:07:09.454005  397302 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1123 09:07:09.454078  397302 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 09:07:09.463814  397302 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 09:07:09.473006  397302 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 09:07:09.482337  397302 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1123 09:07:09.491129  397302 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 09:07:09.500233  397302 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 09:07:09.514176  397302 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 09:07:09.523455  397302 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1123 09:07:09.531320  397302 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1123 09:07:09.539204  397302 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 09:07:09.623347  397302 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1123 09:07:10.890742  397302 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1.267355682s)
	I1123 09:07:10.890767  397302 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1123 09:07:10.890821  397302 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1123 09:07:10.895053  397302 start.go:564] Will wait 60s for crictl version
	I1123 09:07:10.895110  397302 ssh_runner.go:195] Run: which crictl
	I1123 09:07:10.898897  397302 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1123 09:07:10.925045  397302 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1123 09:07:10.925137  397302 ssh_runner.go:195] Run: crio --version
	I1123 09:07:10.954258  397302 ssh_runner.go:195] Run: crio --version
	I1123 09:07:10.986015  397302 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1123 09:07:08.539891  401015 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1123 09:07:08.540128  401015 start.go:159] libmachine.API.Create for "default-k8s-diff-port-602386" (driver="docker")
	I1123 09:07:08.540165  401015 client.go:173] LocalClient.Create starting
	I1123 09:07:08.540239  401015 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21969-103686/.minikube/certs/ca.pem
	I1123 09:07:08.540277  401015 main.go:143] libmachine: Decoding PEM data...
	I1123 09:07:08.540300  401015 main.go:143] libmachine: Parsing certificate...
	I1123 09:07:08.540362  401015 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21969-103686/.minikube/certs/cert.pem
	I1123 09:07:08.540390  401015 main.go:143] libmachine: Decoding PEM data...
	I1123 09:07:08.540409  401015 main.go:143] libmachine: Parsing certificate...
	I1123 09:07:08.540731  401015 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-602386 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1123 09:07:08.559635  401015 cli_runner.go:211] docker network inspect default-k8s-diff-port-602386 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1123 09:07:08.559704  401015 network_create.go:284] running [docker network inspect default-k8s-diff-port-602386] to gather additional debugging logs...
	I1123 09:07:08.559722  401015 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-602386
	W1123 09:07:08.576376  401015 cli_runner.go:211] docker network inspect default-k8s-diff-port-602386 returned with exit code 1
	I1123 09:07:08.576403  401015 network_create.go:287] error running [docker network inspect default-k8s-diff-port-602386]: docker network inspect default-k8s-diff-port-602386: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network default-k8s-diff-port-602386 not found
	I1123 09:07:08.576416  401015 network_create.go:289] output of [docker network inspect default-k8s-diff-port-602386]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network default-k8s-diff-port-602386 not found
	
	** /stderr **
	I1123 09:07:08.576534  401015 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1123 09:07:08.599338  401015 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-f35ea3fda0f8 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:12:67:c4:67:42:d0} reservation:<nil>}
	I1123 09:07:08.600239  401015 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-b5718ee288aa IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:5e:cf:46:ea:6c:f7} reservation:<nil>}
	I1123 09:07:08.601140  401015 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-7539aab81c9c IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:ca:4a:40:12:17:c0} reservation:<nil>}
	I1123 09:07:08.601742  401015 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-76e5790841e8 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:06:11:3d:17:90:c8} reservation:<nil>}
	I1123 09:07:08.602428  401015 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-40c67f27f792 IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:5a:47:fa:80:b9:69} reservation:<nil>}
	I1123 09:07:08.603433  401015 network.go:206] using free private subnet 192.168.94.0/24: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002dcd00}
	I1123 09:07:08.603463  401015 network_create.go:124] attempt to create docker network default-k8s-diff-port-602386 192.168.94.0/24 with gateway 192.168.94.1 and MTU of 1500 ...
	I1123 09:07:08.603519  401015 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.94.0/24 --gateway=192.168.94.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=default-k8s-diff-port-602386 default-k8s-diff-port-602386
	I1123 09:07:08.664437  401015 network_create.go:108] docker network default-k8s-diff-port-602386 192.168.94.0/24 created
	I1123 09:07:08.664471  401015 kic.go:121] calculated static IP "192.168.94.2" for the "default-k8s-diff-port-602386" container
	I1123 09:07:08.664541  401015 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1123 09:07:08.685093  401015 cli_runner.go:164] Run: docker volume create default-k8s-diff-port-602386 --label name.minikube.sigs.k8s.io=default-k8s-diff-port-602386 --label created_by.minikube.sigs.k8s.io=true
	I1123 09:07:08.706493  401015 oci.go:103] Successfully created a docker volume default-k8s-diff-port-602386
	I1123 09:07:08.706567  401015 cli_runner.go:164] Run: docker run --rm --name default-k8s-diff-port-602386-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-602386 --entrypoint /usr/bin/test -v default-k8s-diff-port-602386:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -d /var/lib
	I1123 09:07:09.169250  401015 oci.go:107] Successfully prepared a docker volume default-k8s-diff-port-602386
	I1123 09:07:09.169313  401015 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1123 09:07:09.169325  401015 kic.go:194] Starting extracting preloaded images to volume ...
	I1123 09:07:09.169390  401015 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21969-103686/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-602386:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -I lz4 -xf /preloaded.tar -C /extractDir
	I1123 09:07:12.722960  401015 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21969-103686/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-602386:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -I lz4 -xf /preloaded.tar -C /extractDir: (3.553507067s)
	I1123 09:07:12.723018  401015 kic.go:203] duration metric: took 3.553690036s to extract preloaded images to volume ...
	W1123 09:07:12.723117  401015 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1123 09:07:12.723156  401015 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1123 09:07:12.723200  401015 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1123 09:07:12.790535  401015 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname default-k8s-diff-port-602386 --name default-k8s-diff-port-602386 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-602386 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=default-k8s-diff-port-602386 --network default-k8s-diff-port-602386 --ip 192.168.94.2 --volume default-k8s-diff-port-602386:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8444 --publish=127.0.0.1::8444 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f
	I1123 09:07:13.131535  401015 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-602386 --format={{.State.Running}}
	I1123 09:07:13.152945  401015 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-602386 --format={{.State.Status}}
	I1123 09:07:13.177251  401015 cli_runner.go:164] Run: docker exec default-k8s-diff-port-602386 stat /var/lib/dpkg/alternatives/iptables
	I1123 09:07:13.233908  401015 oci.go:144] the created container "default-k8s-diff-port-602386" has a running status.
	I1123 09:07:13.233944  401015 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21969-103686/.minikube/machines/default-k8s-diff-port-602386/id_rsa...
	I1123 09:07:13.284052  401015 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21969-103686/.minikube/machines/default-k8s-diff-port-602386/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	W1123 09:07:10.872906  384612 node_ready.go:57] node "no-preload-619589" has "Ready":"False" status (will retry)
	W1123 09:07:12.873239  384612 node_ready.go:57] node "no-preload-619589" has "Ready":"False" status (will retry)
	I1123 09:07:10.987223  397302 cli_runner.go:164] Run: docker network inspect embed-certs-529341 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1123 09:07:11.006792  397302 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1123 09:07:11.011399  397302 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
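The hosts-file update above uses a deliberate idiom: filter out any stale `host.minikube.internal` line, append the fresh mapping, write the result to a temp file, and `sudo cp` it back. The copy (rather than a rename) matters because inside a container /etc/hosts is bind-mounted and can only be rewritten in place. A rough Go equivalent, with a hypothetical temp path (/tmp/hosts.new) and the IP taken from the log:

package main

import (
	"os"
	"os/exec"
	"strings"
)

func main() {
	const name = "host.minikube.internal"
	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		panic(err)
	}
	var kept []string
	for _, line := range strings.Split(string(data), "\n") {
		if !strings.HasSuffix(line, "\t"+name) { // drop any stale mapping
			kept = append(kept, line)
		}
	}
	kept = append(kept, "192.168.103.1\t"+name) // fresh mapping
	tmp := "/tmp/hosts.new"
	if err := os.WriteFile(tmp, []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
		panic(err)
	}
	// cp, not mv: preserves the bind-mounted inode of /etc/hosts
	if err := exec.Command("sudo", "cp", tmp, "/etc/hosts").Run(); err != nil {
		panic(err)
	}
}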
	I1123 09:07:11.022499  397302 kubeadm.go:884] updating cluster {Name:embed-certs-529341 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-529341 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1123 09:07:11.022673  397302 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1123 09:07:11.022745  397302 ssh_runner.go:195] Run: sudo crictl images --output json
	I1123 09:07:11.056805  397302 crio.go:514] all images are preloaded for cri-o runtime.
	I1123 09:07:11.056830  397302 crio.go:433] Images already preloaded, skipping extraction
	I1123 09:07:11.056892  397302 ssh_runner.go:195] Run: sudo crictl images --output json
	I1123 09:07:11.089318  397302 crio.go:514] all images are preloaded for cri-o runtime.
	I1123 09:07:11.089342  397302 cache_images.go:86] Images are preloaded, skipping loading
	I1123 09:07:11.089350  397302 kubeadm.go:935] updating node { 192.168.103.2 8443 v1.34.1 crio true true} ...
	I1123 09:07:11.089451  397302 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-529341 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:embed-certs-529341 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1123 09:07:11.089515  397302 ssh_runner.go:195] Run: crio config
	I1123 09:07:11.149263  397302 cni.go:84] Creating CNI manager for ""
	I1123 09:07:11.149287  397302 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1123 09:07:11.149311  397302 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1123 09:07:11.149344  397302 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-529341 NodeName:embed-certs-529341 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1123 09:07:11.149505  397302 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-529341"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
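The rendered config above bundles four documents in one file: InitConfiguration and ClusterConfiguration for kubeadm itself, plus KubeletConfiguration and KubeProxyConfiguration for the components it provisions. minikube produces it from Go templates (kubeadm.go:196 "kubeadm config:"); the sketch below renders a comparable ClusterConfiguration fragment with text/template. The struct and template here are illustrative stand-ins, not minikube's actual template.

package main

import (
	"os"
	"text/template"
)

// clusterCfg carries just the fields rendered in this fragment;
// a hypothetical subset of the values seen in the log above.
type clusterCfg struct {
	Name, Version, PodSubnet, ServiceSubnet string
}

const tmpl = `apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
clusterName: {{.Name}}
kubernetesVersion: {{.Version}}
networking:
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceSubnet}}
`

func main() {
	t := template.Must(template.New("kubeadm").Parse(tmpl))
	// values taken from the generated config above
	_ = t.Execute(os.Stdout, clusterCfg{
		Name: "mk", Version: "v1.34.1",
		PodSubnet: "10.244.0.0/16", ServiceSubnet: "10.96.0.0/12",
	})
}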
	I1123 09:07:11.149589  397302 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1123 09:07:11.157987  397302 binaries.go:51] Found k8s binaries, skipping transfer
	I1123 09:07:11.158065  397302 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1123 09:07:11.166624  397302 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (369 bytes)
	I1123 09:07:11.180022  397302 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1123 09:07:11.196072  397302 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2217 bytes)
	I1123 09:07:11.209241  397302 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1123 09:07:11.213073  397302 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1123 09:07:11.223455  397302 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 09:07:11.311451  397302 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 09:07:11.337825  397302 certs.go:69] Setting up /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/embed-certs-529341 for IP: 192.168.103.2
	I1123 09:07:11.337852  397302 certs.go:195] generating shared ca certs ...
	I1123 09:07:11.337874  397302 certs.go:227] acquiring lock for ca certs: {Name:mkeed0bc088459219396fb35c578dfb0927f31a3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 09:07:11.338078  397302 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21969-103686/.minikube/ca.key
	I1123 09:07:11.338144  397302 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21969-103686/.minikube/proxy-client-ca.key
	I1123 09:07:11.338157  397302 certs.go:257] generating profile certs ...
	I1123 09:07:11.338237  397302 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/embed-certs-529341/client.key
	I1123 09:07:11.338260  397302 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/embed-certs-529341/client.crt with IP's: []
	I1123 09:07:11.432822  397302 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/embed-certs-529341/client.crt ...
	I1123 09:07:11.432846  397302 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/embed-certs-529341/client.crt: {Name:mk5b3b644b94da73003dcd11b5bbf8aadac5a3a3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 09:07:11.433052  397302 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/embed-certs-529341/client.key ...
	I1123 09:07:11.433068  397302 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/embed-certs-529341/client.key: {Name:mkd9084383025926c50242c726c29231db648945 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 09:07:11.433163  397302 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/embed-certs-529341/apiserver.key.ad13d260
	I1123 09:07:11.433180  397302 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/embed-certs-529341/apiserver.crt.ad13d260 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.103.2]
	I1123 09:07:11.486645  397302 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/embed-certs-529341/apiserver.crt.ad13d260 ...
	I1123 09:07:11.486696  397302 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/embed-certs-529341/apiserver.crt.ad13d260: {Name:mkc9b997aad49fb2d9ed2ddcc68a22cfa67d38f9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 09:07:11.486863  397302 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/embed-certs-529341/apiserver.key.ad13d260 ...
	I1123 09:07:11.486877  397302 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/embed-certs-529341/apiserver.key.ad13d260: {Name:mk6a5ce76543ea23995848a74982d605f17ee9e6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 09:07:11.486955  397302 certs.go:382] copying /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/embed-certs-529341/apiserver.crt.ad13d260 -> /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/embed-certs-529341/apiserver.crt
	I1123 09:07:11.487047  397302 certs.go:386] copying /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/embed-certs-529341/apiserver.key.ad13d260 -> /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/embed-certs-529341/apiserver.key
	I1123 09:07:11.487105  397302 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/embed-certs-529341/proxy-client.key
	I1123 09:07:11.487120  397302 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/embed-certs-529341/proxy-client.crt with IP's: []
	I1123 09:07:11.513755  397302 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/embed-certs-529341/proxy-client.crt ...
	I1123 09:07:11.513794  397302 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/embed-certs-529341/proxy-client.crt: {Name:mkcca2629d70cd3d3a97dadcec63a64c6fa9fa80 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 09:07:11.514026  397302 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/embed-certs-529341/proxy-client.key ...
	I1123 09:07:11.514049  397302 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/embed-certs-529341/proxy-client.key: {Name:mk09d79b0822ce46a0a6a00fd47383776f395b58 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
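The crypto.go lines above generate CA-signed profile certificates whose IP SANs ([10.96.0.1 127.0.0.1 10.0.0.1 192.168.103.2]) cover the service VIP, loopback, and the node address. The stdlib sketch below shows the same shape: build an x509 template carrying those SANs and sign it with a CA (self-generated here as a stand-in for .minikube/ca.{crt,key}). A minimal illustration under those assumptions, not minikube's implementation.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func check(err error) {
	if err != nil {
		panic(err)
	}
}

func main() {
	// stand-in CA; the real run reuses the shared minikubeCA key pair
	caKey, err := rsa.GenerateKey(rand.Reader, 2048)
	check(err)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(3, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	check(err)
	caCert, err := x509.ParseCertificate(caDER)
	check(err)

	// leaf certificate carrying the apiserver IP SANs seen in the log
	leafKey, err := rsa.GenerateKey(rand.Reader, 2048)
	check(err)
	leafTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"), net.ParseIP("192.168.103.2"),
		},
	}
	leafDER, err := x509.CreateCertificate(rand.Reader, leafTmpl, caCert, &leafKey.PublicKey, caKey)
	check(err)
	check(pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: leafDER}))
}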
	I1123 09:07:11.514300  397302 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-103686/.minikube/certs/107234.pem (1338 bytes)
	W1123 09:07:11.514354  397302 certs.go:480] ignoring /home/jenkins/minikube-integration/21969-103686/.minikube/certs/107234_empty.pem, impossibly tiny 0 bytes
	I1123 09:07:11.514368  397302 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-103686/.minikube/certs/ca-key.pem (1675 bytes)
	I1123 09:07:11.514405  397302 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-103686/.minikube/certs/ca.pem (1078 bytes)
	I1123 09:07:11.514437  397302 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-103686/.minikube/certs/cert.pem (1123 bytes)
	I1123 09:07:11.514470  397302 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-103686/.minikube/certs/key.pem (1675 bytes)
	I1123 09:07:11.514527  397302 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-103686/.minikube/files/etc/ssl/certs/1072342.pem (1708 bytes)
	I1123 09:07:11.515299  397302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-103686/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1123 09:07:11.533591  397302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-103686/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1123 09:07:11.551717  397302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-103686/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1123 09:07:11.572084  397302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-103686/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1671 bytes)
	I1123 09:07:11.608817  397302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/embed-certs-529341/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1123 09:07:11.635193  397302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/embed-certs-529341/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1123 09:07:11.654842  397302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/embed-certs-529341/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1123 09:07:11.673894  397302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/embed-certs-529341/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1123 09:07:11.692744  397302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-103686/.minikube/certs/107234.pem --> /usr/share/ca-certificates/107234.pem (1338 bytes)
	I1123 09:07:11.790445  397302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-103686/.minikube/files/etc/ssl/certs/1072342.pem --> /usr/share/ca-certificates/1072342.pem (1708 bytes)
	I1123 09:07:11.809288  397302 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-103686/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1123 09:07:11.826695  397302 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1123 09:07:11.839496  397302 ssh_runner.go:195] Run: openssl version
	I1123 09:07:11.845701  397302 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1123 09:07:11.854419  397302 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1123 09:07:11.858307  397302 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 23 08:20 /usr/share/ca-certificates/minikubeCA.pem
	I1123 09:07:11.858369  397302 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1123 09:07:11.893944  397302 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1123 09:07:11.903158  397302 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/107234.pem && ln -fs /usr/share/ca-certificates/107234.pem /etc/ssl/certs/107234.pem"
	I1123 09:07:11.911598  397302 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/107234.pem
	I1123 09:07:11.915419  397302 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 23 08:25 /usr/share/ca-certificates/107234.pem
	I1123 09:07:11.915468  397302 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/107234.pem
	I1123 09:07:11.951499  397302 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/107234.pem /etc/ssl/certs/51391683.0"
	I1123 09:07:11.960556  397302 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1072342.pem && ln -fs /usr/share/ca-certificates/1072342.pem /etc/ssl/certs/1072342.pem"
	I1123 09:07:11.969343  397302 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1072342.pem
	I1123 09:07:11.973166  397302 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 23 08:25 /usr/share/ca-certificates/1072342.pem
	I1123 09:07:11.973280  397302 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1072342.pem
	I1123 09:07:12.008195  397302 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1072342.pem /etc/ssl/certs/3ec20f2e.0"
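Each `ln -fs ... /etc/ssl/certs/<hash>.0` above creates an OpenSSL-style CApath symlink: verifiers look a CA up by the hash of its subject name, which is exactly what the preceding `openssl x509 -hash -noout` invocations print (b5213941 for minikubeCA, per the log). A small Go sketch, assuming the minikubeCA path from the log, that derives the same symlink name:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// print the subject hash that names the /etc/ssl/certs/<hash>.0 link
	out, err := exec.Command("openssl", "x509", "-hash", "-noout",
		"-in", "/usr/share/ca-certificates/minikubeCA.pem").Output()
	if err != nil {
		panic(err)
	}
	hash := strings.TrimSpace(string(out))
	fmt.Printf("ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/%s.0\n", hash)
}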
	I1123 09:07:12.017181  397302 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1123 09:07:12.021258  397302 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1123 09:07:12.021330  397302 kubeadm.go:401] StartCluster: {Name:embed-certs-529341 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-529341 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 09:07:12.021406  397302 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1123 09:07:12.021464  397302 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1123 09:07:12.050447  397302 cri.go:89] found id: ""
	I1123 09:07:12.050542  397302 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1123 09:07:12.059082  397302 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1123 09:07:12.067451  397302 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1123 09:07:12.067509  397302 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1123 09:07:12.076521  397302 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1123 09:07:12.076540  397302 kubeadm.go:158] found existing configuration files:
	
	I1123 09:07:12.076582  397302 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1123 09:07:12.085853  397302 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1123 09:07:12.085946  397302 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1123 09:07:12.093778  397302 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1123 09:07:12.102990  397302 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1123 09:07:12.103056  397302 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1123 09:07:12.110889  397302 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1123 09:07:12.118847  397302 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1123 09:07:12.118907  397302 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1123 09:07:12.127293  397302 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1123 09:07:12.135391  397302 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1123 09:07:12.135445  397302 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1123 09:07:12.143092  397302 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1123 09:07:12.201669  397302 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1044-gcp\n", err: exit status 1
	I1123 09:07:12.262328  397302 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1123 09:07:13.324400  401015 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-602386 --format={{.State.Status}}
	I1123 09:07:13.349723  401015 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1123 09:07:13.349753  401015 kic_runner.go:114] Args: [docker exec --privileged default-k8s-diff-port-602386 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1123 09:07:13.422042  401015 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-602386 --format={{.State.Status}}
	I1123 09:07:13.446288  401015 machine.go:94] provisionDockerMachine start ...
	I1123 09:07:13.446374  401015 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-602386
	I1123 09:07:13.468364  401015 main.go:143] libmachine: Using SSH client type: native
	I1123 09:07:13.468746  401015 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33103 <nil> <nil>}
	I1123 09:07:13.468767  401015 main.go:143] libmachine: About to run SSH command:
	hostname
	I1123 09:07:13.470089  401015 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:33284->127.0.0.1:33103: read: connection reset by peer
	I1123 09:07:16.619272  401015 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-602386
	
	I1123 09:07:16.619300  401015 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-602386"
	I1123 09:07:16.619368  401015 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-602386
	I1123 09:07:16.637322  401015 main.go:143] libmachine: Using SSH client type: native
	I1123 09:07:16.637643  401015 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33103 <nil> <nil>}
	I1123 09:07:16.637667  401015 main.go:143] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-602386 && echo "default-k8s-diff-port-602386" | sudo tee /etc/hostname
	I1123 09:07:16.791847  401015 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-602386
	
	I1123 09:07:16.791949  401015 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-602386
	I1123 09:07:16.817380  401015 main.go:143] libmachine: Using SSH client type: native
	I1123 09:07:16.817685  401015 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33103 <nil> <nil>}
	I1123 09:07:16.817716  401015 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-602386' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-602386/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-602386' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1123 09:07:16.962759  401015 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1123 09:07:16.962788  401015 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21969-103686/.minikube CaCertPath:/home/jenkins/minikube-integration/21969-103686/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21969-103686/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21969-103686/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21969-103686/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21969-103686/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21969-103686/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21969-103686/.minikube}
	I1123 09:07:16.962817  401015 ubuntu.go:190] setting up certificates
	I1123 09:07:16.962835  401015 provision.go:84] configureAuth start
	I1123 09:07:16.962908  401015 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-602386
	I1123 09:07:16.981134  401015 provision.go:143] copyHostCerts
	I1123 09:07:16.981206  401015 exec_runner.go:144] found /home/jenkins/minikube-integration/21969-103686/.minikube/ca.pem, removing ...
	I1123 09:07:16.981221  401015 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21969-103686/.minikube/ca.pem
	I1123 09:07:16.981308  401015 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21969-103686/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21969-103686/.minikube/ca.pem (1078 bytes)
	I1123 09:07:16.981422  401015 exec_runner.go:144] found /home/jenkins/minikube-integration/21969-103686/.minikube/cert.pem, removing ...
	I1123 09:07:16.981435  401015 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21969-103686/.minikube/cert.pem
	I1123 09:07:16.981477  401015 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21969-103686/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21969-103686/.minikube/cert.pem (1123 bytes)
	I1123 09:07:16.981564  401015 exec_runner.go:144] found /home/jenkins/minikube-integration/21969-103686/.minikube/key.pem, removing ...
	I1123 09:07:16.981575  401015 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21969-103686/.minikube/key.pem
	I1123 09:07:16.981613  401015 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21969-103686/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21969-103686/.minikube/key.pem (1675 bytes)
	I1123 09:07:16.981692  401015 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21969-103686/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21969-103686/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21969-103686/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-602386 san=[127.0.0.1 192.168.94.2 default-k8s-diff-port-602386 localhost minikube]
	I1123 09:07:17.019027  401015 provision.go:177] copyRemoteCerts
	I1123 09:07:17.019108  401015 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1123 09:07:17.019167  401015 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-602386
	I1123 09:07:17.039250  401015 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/21969-103686/.minikube/machines/default-k8s-diff-port-602386/id_rsa Username:docker}
	I1123 09:07:17.140133  401015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-103686/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1123 09:07:17.159922  401015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-103686/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1123 09:07:17.176895  401015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-103686/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1123 09:07:17.193533  401015 provision.go:87] duration metric: took 230.682483ms to configureAuth
	I1123 09:07:17.193561  401015 ubuntu.go:206] setting minikube options for container-runtime
	I1123 09:07:17.193714  401015 config.go:182] Loaded profile config "default-k8s-diff-port-602386": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 09:07:17.193813  401015 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-602386
	I1123 09:07:17.211534  401015 main.go:143] libmachine: Using SSH client type: native
	I1123 09:07:17.211744  401015 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33103 <nil> <nil>}
	I1123 09:07:17.211758  401015 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1123 09:07:17.536040  401015 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1123 09:07:17.536067  401015 machine.go:97] duration metric: took 4.08975761s to provisionDockerMachine
	I1123 09:07:17.536078  401015 client.go:176] duration metric: took 8.995905286s to LocalClient.Create
	I1123 09:07:17.536099  401015 start.go:167] duration metric: took 8.995972936s to libmachine.API.Create "default-k8s-diff-port-602386"
	I1123 09:07:17.536108  401015 start.go:293] postStartSetup for "default-k8s-diff-port-602386" (driver="docker")
	I1123 09:07:17.536120  401015 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1123 09:07:17.536184  401015 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1123 09:07:17.536234  401015 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-602386
	I1123 09:07:17.557086  401015 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/21969-103686/.minikube/machines/default-k8s-diff-port-602386/id_rsa Username:docker}
	I1123 09:07:17.663137  401015 ssh_runner.go:195] Run: cat /etc/os-release
	I1123 09:07:17.666821  401015 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1123 09:07:17.666844  401015 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1123 09:07:17.666855  401015 filesync.go:126] Scanning /home/jenkins/minikube-integration/21969-103686/.minikube/addons for local assets ...
	I1123 09:07:17.666918  401015 filesync.go:126] Scanning /home/jenkins/minikube-integration/21969-103686/.minikube/files for local assets ...
	I1123 09:07:17.667019  401015 filesync.go:149] local asset: /home/jenkins/minikube-integration/21969-103686/.minikube/files/etc/ssl/certs/1072342.pem -> 1072342.pem in /etc/ssl/certs
	I1123 09:07:17.667135  401015 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1123 09:07:17.675199  401015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-103686/.minikube/files/etc/ssl/certs/1072342.pem --> /etc/ssl/certs/1072342.pem (1708 bytes)
	I1123 09:07:17.696106  401015 start.go:296] duration metric: took 159.981057ms for postStartSetup
	I1123 09:07:17.696477  401015 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-602386
	I1123 09:07:17.716079  401015 profile.go:143] Saving config to /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/default-k8s-diff-port-602386/config.json ...
	I1123 09:07:17.716371  401015 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1123 09:07:17.716432  401015 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-602386
	I1123 09:07:17.737448  401015 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/21969-103686/.minikube/machines/default-k8s-diff-port-602386/id_rsa Username:docker}
	I1123 09:07:17.837276  401015 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1123 09:07:17.842028  401015 start.go:128] duration metric: took 9.304145366s to createHost
	I1123 09:07:17.842055  401015 start.go:83] releasing machines lock for "default-k8s-diff-port-602386", held for 9.304299946s
	I1123 09:07:17.842127  401015 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-602386
	I1123 09:07:17.862406  401015 ssh_runner.go:195] Run: cat /version.json
	I1123 09:07:17.862455  401015 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-602386
	I1123 09:07:17.862478  401015 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1123 09:07:17.862558  401015 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-602386
	I1123 09:07:17.881896  401015 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/21969-103686/.minikube/machines/default-k8s-diff-port-602386/id_rsa Username:docker}
	I1123 09:07:17.883268  401015 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33103 SSHKeyPath:/home/jenkins/minikube-integration/21969-103686/.minikube/machines/default-k8s-diff-port-602386/id_rsa Username:docker}
	I1123 09:07:18.037653  401015 ssh_runner.go:195] Run: systemctl --version
	I1123 09:07:18.044436  401015 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1123 09:07:18.086481  401015 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1123 09:07:18.092200  401015 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1123 09:07:18.092267  401015 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1123 09:07:18.126452  401015 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1123 09:07:18.126479  401015 start.go:496] detecting cgroup driver to use...
	I1123 09:07:18.126514  401015 detect.go:190] detected "systemd" cgroup driver on host os
	I1123 09:07:18.126563  401015 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1123 09:07:18.148832  401015 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1123 09:07:18.165131  401015 docker.go:218] disabling cri-docker service (if available) ...
	I1123 09:07:18.165189  401015 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1123 09:07:18.186168  401015 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1123 09:07:18.208666  401015 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1123 09:07:18.313243  401015 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	W1123 09:07:15.372093  384612 node_ready.go:57] node "no-preload-619589" has "Ready":"False" status (will retry)
	I1123 09:07:17.372229  384612 node_ready.go:49] node "no-preload-619589" is "Ready"
	I1123 09:07:17.372263  384612 node_ready.go:38] duration metric: took 15.003099644s for node "no-preload-619589" to be "Ready" ...
	I1123 09:07:17.372279  384612 api_server.go:52] waiting for apiserver process to appear ...
	I1123 09:07:17.372335  384612 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1123 09:07:17.394458  384612 api_server.go:72] duration metric: took 15.353136002s to wait for apiserver process to appear ...
	I1123 09:07:17.394485  384612 api_server.go:88] waiting for apiserver healthz status ...
	I1123 09:07:17.394508  384612 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1123 09:07:17.400359  384612 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1123 09:07:17.401698  384612 api_server.go:141] control plane version: v1.34.1
	I1123 09:07:17.401725  384612 api_server.go:131] duration metric: took 7.231748ms to wait for apiserver health ...
	I1123 09:07:17.401736  384612 system_pods.go:43] waiting for kube-system pods to appear ...
	I1123 09:07:17.410117  384612 system_pods.go:59] 8 kube-system pods found
	I1123 09:07:17.410215  384612 system_pods.go:61] "coredns-66bc5c9577-dhxwz" [700b8476-b8f8-4865-9308-fc8b30ac5a5f] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 09:07:17.410242  384612 system_pods.go:61] "etcd-no-preload-619589" [c738ad56-564c-4400-ba9c-8ce7475fda42] Running
	I1123 09:07:17.410260  384612 system_pods.go:61] "kindnet-dp6kh" [a4901c5e-17b7-4174-a5d6-32fe5ec489a7] Running
	I1123 09:07:17.410287  384612 system_pods.go:61] "kube-apiserver-no-preload-619589" [d1ff4174-7096-4ddf-90a5-8c5809f096be] Running
	I1123 09:07:17.410296  384612 system_pods.go:61] "kube-controller-manager-no-preload-619589" [17b6ebd0-f2eb-4a0e-9656-7fb19cf255cc] Running
	I1123 09:07:17.410301  384612 system_pods.go:61] "kube-proxy-qbkwc" [ec82425a-3713-4d37-85b7-4fec7ae69b78] Running
	I1123 09:07:17.410348  384612 system_pods.go:61] "kube-scheduler-no-preload-619589" [cdf70351-7c44-43bf-be98-17eca3c3d388] Running
	I1123 09:07:17.410357  384612 system_pods.go:61] "storage-provisioner" [acbfaa48-c8ba-4200-b5d7-e8f168a2de80] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 09:07:17.410367  384612 system_pods.go:74] duration metric: took 8.622539ms to wait for pod list to return data ...
	I1123 09:07:17.410378  384612 default_sa.go:34] waiting for default service account to be created ...
	I1123 09:07:17.414928  384612 default_sa.go:45] found service account: "default"
	I1123 09:07:17.414954  384612 default_sa.go:55] duration metric: took 4.568513ms for default service account to be created ...
	I1123 09:07:17.414976  384612 system_pods.go:116] waiting for k8s-apps to be running ...
	I1123 09:07:17.507929  384612 system_pods.go:86] 8 kube-system pods found
	I1123 09:07:17.507960  384612 system_pods.go:89] "coredns-66bc5c9577-dhxwz" [700b8476-b8f8-4865-9308-fc8b30ac5a5f] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 09:07:17.507993  384612 system_pods.go:89] "etcd-no-preload-619589" [c738ad56-564c-4400-ba9c-8ce7475fda42] Running
	I1123 09:07:17.508001  384612 system_pods.go:89] "kindnet-dp6kh" [a4901c5e-17b7-4174-a5d6-32fe5ec489a7] Running
	I1123 09:07:17.508007  384612 system_pods.go:89] "kube-apiserver-no-preload-619589" [d1ff4174-7096-4ddf-90a5-8c5809f096be] Running
	I1123 09:07:17.508013  384612 system_pods.go:89] "kube-controller-manager-no-preload-619589" [17b6ebd0-f2eb-4a0e-9656-7fb19cf255cc] Running
	I1123 09:07:17.508020  384612 system_pods.go:89] "kube-proxy-qbkwc" [ec82425a-3713-4d37-85b7-4fec7ae69b78] Running
	I1123 09:07:17.508025  384612 system_pods.go:89] "kube-scheduler-no-preload-619589" [cdf70351-7c44-43bf-be98-17eca3c3d388] Running
	I1123 09:07:17.508042  384612 system_pods.go:89] "storage-provisioner" [acbfaa48-c8ba-4200-b5d7-e8f168a2de80] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 09:07:17.508074  384612 retry.go:31] will retry after 222.55578ms: missing components: kube-dns
	I1123 09:07:17.735277  384612 system_pods.go:86] 8 kube-system pods found
	I1123 09:07:17.735317  384612 system_pods.go:89] "coredns-66bc5c9577-dhxwz" [700b8476-b8f8-4865-9308-fc8b30ac5a5f] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 09:07:17.735325  384612 system_pods.go:89] "etcd-no-preload-619589" [c738ad56-564c-4400-ba9c-8ce7475fda42] Running
	I1123 09:07:17.735334  384612 system_pods.go:89] "kindnet-dp6kh" [a4901c5e-17b7-4174-a5d6-32fe5ec489a7] Running
	I1123 09:07:17.735341  384612 system_pods.go:89] "kube-apiserver-no-preload-619589" [d1ff4174-7096-4ddf-90a5-8c5809f096be] Running
	I1123 09:07:17.735347  384612 system_pods.go:89] "kube-controller-manager-no-preload-619589" [17b6ebd0-f2eb-4a0e-9656-7fb19cf255cc] Running
	I1123 09:07:17.735356  384612 system_pods.go:89] "kube-proxy-qbkwc" [ec82425a-3713-4d37-85b7-4fec7ae69b78] Running
	I1123 09:07:17.735361  384612 system_pods.go:89] "kube-scheduler-no-preload-619589" [cdf70351-7c44-43bf-be98-17eca3c3d388] Running
	I1123 09:07:17.735371  384612 system_pods.go:89] "storage-provisioner" [acbfaa48-c8ba-4200-b5d7-e8f168a2de80] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 09:07:17.735390  384612 retry.go:31] will retry after 389.439054ms: missing components: kube-dns
	I1123 09:07:18.129634  384612 system_pods.go:86] 8 kube-system pods found
	I1123 09:07:18.129671  384612 system_pods.go:89] "coredns-66bc5c9577-dhxwz" [700b8476-b8f8-4865-9308-fc8b30ac5a5f] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 09:07:18.129679  384612 system_pods.go:89] "etcd-no-preload-619589" [c738ad56-564c-4400-ba9c-8ce7475fda42] Running
	I1123 09:07:18.129694  384612 system_pods.go:89] "kindnet-dp6kh" [a4901c5e-17b7-4174-a5d6-32fe5ec489a7] Running
	I1123 09:07:18.129713  384612 system_pods.go:89] "kube-apiserver-no-preload-619589" [d1ff4174-7096-4ddf-90a5-8c5809f096be] Running
	I1123 09:07:18.129719  384612 system_pods.go:89] "kube-controller-manager-no-preload-619589" [17b6ebd0-f2eb-4a0e-9656-7fb19cf255cc] Running
	I1123 09:07:18.129727  384612 system_pods.go:89] "kube-proxy-qbkwc" [ec82425a-3713-4d37-85b7-4fec7ae69b78] Running
	I1123 09:07:18.129732  384612 system_pods.go:89] "kube-scheduler-no-preload-619589" [cdf70351-7c44-43bf-be98-17eca3c3d388] Running
	I1123 09:07:18.129748  384612 system_pods.go:89] "storage-provisioner" [acbfaa48-c8ba-4200-b5d7-e8f168a2de80] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 09:07:18.129770  384612 retry.go:31] will retry after 389.277726ms: missing components: kube-dns
	I1123 09:07:18.526537  384612 system_pods.go:86] 8 kube-system pods found
	I1123 09:07:18.526634  384612 system_pods.go:89] "coredns-66bc5c9577-dhxwz" [700b8476-b8f8-4865-9308-fc8b30ac5a5f] Running
	I1123 09:07:18.526662  384612 system_pods.go:89] "etcd-no-preload-619589" [c738ad56-564c-4400-ba9c-8ce7475fda42] Running
	I1123 09:07:18.526670  384612 system_pods.go:89] "kindnet-dp6kh" [a4901c5e-17b7-4174-a5d6-32fe5ec489a7] Running
	I1123 09:07:18.526676  384612 system_pods.go:89] "kube-apiserver-no-preload-619589" [d1ff4174-7096-4ddf-90a5-8c5809f096be] Running
	I1123 09:07:18.526683  384612 system_pods.go:89] "kube-controller-manager-no-preload-619589" [17b6ebd0-f2eb-4a0e-9656-7fb19cf255cc] Running
	I1123 09:07:18.526688  384612 system_pods.go:89] "kube-proxy-qbkwc" [ec82425a-3713-4d37-85b7-4fec7ae69b78] Running
	I1123 09:07:18.526693  384612 system_pods.go:89] "kube-scheduler-no-preload-619589" [cdf70351-7c44-43bf-be98-17eca3c3d388] Running
	I1123 09:07:18.526698  384612 system_pods.go:89] "storage-provisioner" [acbfaa48-c8ba-4200-b5d7-e8f168a2de80] Running
	I1123 09:07:18.526710  384612 system_pods.go:126] duration metric: took 1.111725004s to wait for k8s-apps to be running ...
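The retry loop above (system_pods.go, retrying while `missing components: kube-dns`) is a poll-until-running pattern over the kube-system namespace. A condensed client-go sketch of the same idea follows, assuming the kubeconfig path written to the node earlier in the log; it is a stand-in for illustration, not minikube's system_pods.go.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	// poll every 500ms, up to 2 minutes, until all pods report Running
	err = wait.PollImmediate(500*time.Millisecond, 2*time.Minute, func() (bool, error) {
		pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
		if err != nil {
			return false, nil // transient API errors: keep retrying
		}
		for _, p := range pods.Items {
			if p.Status.Phase != corev1.PodRunning {
				fmt.Printf("waiting: %s is %s\n", p.Name, p.Status.Phase)
				return false, nil
			}
		}
		return true, nil
	})
	if err != nil {
		panic(err)
	}
}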
	I1123 09:07:18.526721  384612 system_svc.go:44] waiting for kubelet service to be running ....
	I1123 09:07:18.526780  384612 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 09:07:18.547171  384612 system_svc.go:56] duration metric: took 20.438265ms WaitForService to wait for kubelet
	I1123 09:07:18.547204  384612 kubeadm.go:587] duration metric: took 16.505886437s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1123 09:07:18.547369  384612 node_conditions.go:102] verifying NodePressure condition ...
	I1123 09:07:18.551143  384612 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1123 09:07:18.551171  384612 node_conditions.go:123] node cpu capacity is 8
	I1123 09:07:18.551187  384612 node_conditions.go:105] duration metric: took 3.798981ms to run NodePressure ...
	I1123 09:07:18.551204  384612 start.go:242] waiting for startup goroutines ...
	I1123 09:07:18.551214  384612 start.go:247] waiting for cluster config update ...
	I1123 09:07:18.551232  384612 start.go:256] writing updated cluster config ...
	I1123 09:07:18.551527  384612 ssh_runner.go:195] Run: rm -f paused
	I1123 09:07:18.556916  384612 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1123 09:07:18.561388  384612 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-dhxwz" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:07:18.568253  384612 pod_ready.go:94] pod "coredns-66bc5c9577-dhxwz" is "Ready"
	I1123 09:07:18.568280  384612 pod_ready.go:86] duration metric: took 6.865789ms for pod "coredns-66bc5c9577-dhxwz" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:07:18.570783  384612 pod_ready.go:83] waiting for pod "etcd-no-preload-619589" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:07:18.575047  384612 pod_ready.go:94] pod "etcd-no-preload-619589" is "Ready"
	I1123 09:07:18.575071  384612 pod_ready.go:86] duration metric: took 4.263651ms for pod "etcd-no-preload-619589" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:07:18.577446  384612 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-619589" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:07:18.581550  384612 pod_ready.go:94] pod "kube-apiserver-no-preload-619589" is "Ready"
	I1123 09:07:18.581570  384612 pod_ready.go:86] duration metric: took 4.103469ms for pod "kube-apiserver-no-preload-619589" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:07:18.583515  384612 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-619589" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:07:18.439919  401015 docker.go:234] disabling docker service ...
	I1123 09:07:18.440174  401015 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1123 09:07:18.468609  401015 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1123 09:07:18.485178  401015 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1123 09:07:18.604900  401015 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1123 09:07:18.705060  401015 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1123 09:07:18.725341  401015 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1123 09:07:18.740280  401015 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1123 09:07:18.740346  401015 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 09:07:18.751128  401015 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1123 09:07:18.751203  401015 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 09:07:18.760962  401015 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 09:07:18.770396  401015 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 09:07:18.779415  401015 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1123 09:07:18.787895  401015 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 09:07:18.799405  401015 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 09:07:18.816085  401015 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 09:07:18.824844  401015 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1123 09:07:18.832925  401015 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1123 09:07:18.840670  401015 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 09:07:18.923493  401015 ssh_runner.go:195] Run: sudo systemctl restart crio
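	(Taken together, the sed edits above leave /etc/crio/crio.conf.d/02-crio.conf with roughly the following settings before the daemon-reload and restart; this is a reconstruction from the commands, with TOML table placement assumed rather than captured from the node:

	    [crio.image]
	    pause_image = "registry.k8s.io/pause:3.10.1"

	    [crio.runtime]
	    cgroup_manager = "systemd"
	    conmon_cgroup = "pod"
	    default_sysctls = [
	      "net.ipv4.ip_unprivileged_port_start=0",
	    ]
	)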
	I1123 09:07:19.083157  401015 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1123 09:07:19.083234  401015 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1123 09:07:19.087889  401015 start.go:564] Will wait 60s for crictl version
	I1123 09:07:19.087958  401015 ssh_runner.go:195] Run: which crictl
	I1123 09:07:19.092139  401015 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1123 09:07:19.125226  401015 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1123 09:07:19.125309  401015 ssh_runner.go:195] Run: crio --version
	I1123 09:07:19.171479  401015 ssh_runner.go:195] Run: crio --version
	I1123 09:07:19.209444  401015 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1123 09:07:18.963658  384612 pod_ready.go:94] pod "kube-controller-manager-no-preload-619589" is "Ready"
	I1123 09:07:18.963706  384612 pod_ready.go:86] duration metric: took 380.169358ms for pod "kube-controller-manager-no-preload-619589" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:07:19.162339  384612 pod_ready.go:83] waiting for pod "kube-proxy-qbkwc" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:07:19.562012  384612 pod_ready.go:94] pod "kube-proxy-qbkwc" is "Ready"
	I1123 09:07:19.562037  384612 pod_ready.go:86] duration metric: took 399.666782ms for pod "kube-proxy-qbkwc" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:07:19.762390  384612 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-619589" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:07:20.161777  384612 pod_ready.go:94] pod "kube-scheduler-no-preload-619589" is "Ready"
	I1123 09:07:20.161805  384612 pod_ready.go:86] duration metric: took 399.389346ms for pod "kube-scheduler-no-preload-619589" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:07:20.161822  384612 pod_ready.go:40] duration metric: took 1.604868948s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1123 09:07:20.216169  384612 start.go:625] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1123 09:07:20.218594  384612 out.go:179] * Done! kubectl is now configured to use "no-preload-619589" cluster and "default" namespace by default
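	(The extra readiness wait above can be reproduced against the finished cluster with plain kubectl; an illustrative equivalent for the CoreDNS pods, assuming the kubectl context carries the profile name as minikube configures it:

	    kubectl --context no-preload-619589 -n kube-system wait pod \
	      -l k8s-app=kube-dns --for=condition=Ready --timeout=4m0s
	)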
	I1123 09:07:19.210548  401015 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-602386 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1123 09:07:19.227633  401015 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1123 09:07:19.231781  401015 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1123 09:07:19.244540  401015 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-602386 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-602386 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1123 09:07:19.244707  401015 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1123 09:07:19.244779  401015 ssh_runner.go:195] Run: sudo crictl images --output json
	I1123 09:07:19.287574  401015 crio.go:514] all images are preloaded for cri-o runtime.
	I1123 09:07:19.287602  401015 crio.go:433] Images already preloaded, skipping extraction
	I1123 09:07:19.287671  401015 ssh_runner.go:195] Run: sudo crictl images --output json
	I1123 09:07:19.319083  401015 crio.go:514] all images are preloaded for cri-o runtime.
	I1123 09:07:19.319107  401015 cache_images.go:86] Images are preloaded, skipping loading
	I1123 09:07:19.319116  401015 kubeadm.go:935] updating node { 192.168.94.2 8444 v1.34.1 crio true true} ...
	I1123 09:07:19.319219  401015 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-602386 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-602386 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
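	(The empty ExecStart= in the rendered unit above is the standard systemd override idiom: for a non-oneshot service, ExecStart may only be assigned once, so a drop-in must first clear the inherited command before supplying its own. The 10-kubeadm.conf drop-in written to the node later in this run has the same shape:

	    # /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (shape only)
	    [Service]
	    ExecStart=
	    ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet ...  # full flag list as in the rendered unit above
	)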
	I1123 09:07:19.319314  401015 ssh_runner.go:195] Run: crio config
	I1123 09:07:19.366233  401015 cni.go:84] Creating CNI manager for ""
	I1123 09:07:19.366256  401015 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1123 09:07:19.366273  401015 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1123 09:07:19.366301  401015 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8444 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-602386 NodeName:default-k8s-diff-port-602386 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1123 09:07:19.366440  401015 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-602386"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
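	(The kubeadm.yaml assembled above is validated implicitly by kubeadm init; it can also be checked offline first with the bundled validator. Illustrative command, not part of this run:

	    sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate \
	      --config /var/tmp/minikube/kubeadm.yaml
	)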
	
	I1123 09:07:19.366517  401015 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1123 09:07:19.375263  401015 binaries.go:51] Found k8s binaries, skipping transfer
	I1123 09:07:19.375343  401015 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1123 09:07:19.383234  401015 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1123 09:07:19.396109  401015 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1123 09:07:19.410922  401015 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2224 bytes)
	I1123 09:07:19.423369  401015 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1123 09:07:19.426946  401015 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1123 09:07:19.436611  401015 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 09:07:19.523405  401015 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 09:07:19.549294  401015 certs.go:69] Setting up /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/default-k8s-diff-port-602386 for IP: 192.168.94.2
	I1123 09:07:19.549315  401015 certs.go:195] generating shared ca certs ...
	I1123 09:07:19.549331  401015 certs.go:227] acquiring lock for ca certs: {Name:mkeed0bc088459219396fb35c578dfb0927f31a3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 09:07:19.549496  401015 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21969-103686/.minikube/ca.key
	I1123 09:07:19.549553  401015 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21969-103686/.minikube/proxy-client-ca.key
	I1123 09:07:19.549568  401015 certs.go:257] generating profile certs ...
	I1123 09:07:19.549637  401015 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/default-k8s-diff-port-602386/client.key
	I1123 09:07:19.549655  401015 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/default-k8s-diff-port-602386/client.crt with IP's: []
	I1123 09:07:19.649607  401015 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/default-k8s-diff-port-602386/client.crt ...
	I1123 09:07:19.649636  401015 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/default-k8s-diff-port-602386/client.crt: {Name:mk5b2fe59582d8d46cac5b2bd63c890581c50d09 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 09:07:19.649813  401015 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/default-k8s-diff-port-602386/client.key ...
	I1123 09:07:19.649827  401015 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/default-k8s-diff-port-602386/client.key: {Name:mkcf2767a7740d2aca1a5165fe4a051e1af24fda Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 09:07:19.649905  401015 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/default-k8s-diff-port-602386/apiserver.key.0582d586
	I1123 09:07:19.649921  401015 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/default-k8s-diff-port-602386/apiserver.crt.0582d586 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.94.2]
	I1123 09:07:19.746469  401015 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/default-k8s-diff-port-602386/apiserver.crt.0582d586 ...
	I1123 09:07:19.746496  401015 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/default-k8s-diff-port-602386/apiserver.crt.0582d586: {Name:mk7f9bf82801ec5fc2ea5071cbc211e201c0f369 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 09:07:19.746662  401015 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/default-k8s-diff-port-602386/apiserver.key.0582d586 ...
	I1123 09:07:19.746676  401015 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/default-k8s-diff-port-602386/apiserver.key.0582d586: {Name:mk20422827218122dac2d73d8f8f064621c0f2c9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 09:07:19.746760  401015 certs.go:382] copying /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/default-k8s-diff-port-602386/apiserver.crt.0582d586 -> /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/default-k8s-diff-port-602386/apiserver.crt
	I1123 09:07:19.746891  401015 certs.go:386] copying /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/default-k8s-diff-port-602386/apiserver.key.0582d586 -> /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/default-k8s-diff-port-602386/apiserver.key
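	(The IP list requested above becomes the serving certificate's subject alternative names: 10.96.0.1 is the ClusterIP the in-cluster kubernetes Service will receive, the first address of ServiceCIDR 10.96.0.0/12, and 192.168.94.2 is the node address. They can be confirmed after the copy with:

	    openssl x509 -noout -text -in \
	      /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/default-k8s-diff-port-602386/apiserver.crt \
	      | grep -A1 'Subject Alternative Name'
	)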
	I1123 09:07:19.746951  401015 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/default-k8s-diff-port-602386/proxy-client.key
	I1123 09:07:19.746978  401015 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/default-k8s-diff-port-602386/proxy-client.crt with IP's: []
	I1123 09:07:19.789729  401015 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/default-k8s-diff-port-602386/proxy-client.crt ...
	I1123 09:07:19.789754  401015 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/default-k8s-diff-port-602386/proxy-client.crt: {Name:mkb27e04c4a98897ab1b8352760acee9e2024096 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 09:07:19.789916  401015 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/default-k8s-diff-port-602386/proxy-client.key ...
	I1123 09:07:19.789930  401015 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/default-k8s-diff-port-602386/proxy-client.key: {Name:mk7872805d82e2378a5a0c4f8f30134a4a0bc4ab Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 09:07:19.790122  401015 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-103686/.minikube/certs/107234.pem (1338 bytes)
	W1123 09:07:19.790166  401015 certs.go:480] ignoring /home/jenkins/minikube-integration/21969-103686/.minikube/certs/107234_empty.pem, impossibly tiny 0 bytes
	I1123 09:07:19.790183  401015 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-103686/.minikube/certs/ca-key.pem (1675 bytes)
	I1123 09:07:19.790221  401015 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-103686/.minikube/certs/ca.pem (1078 bytes)
	I1123 09:07:19.790251  401015 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-103686/.minikube/certs/cert.pem (1123 bytes)
	I1123 09:07:19.790281  401015 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-103686/.minikube/certs/key.pem (1675 bytes)
	I1123 09:07:19.790349  401015 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-103686/.minikube/files/etc/ssl/certs/1072342.pem (1708 bytes)
	I1123 09:07:19.791090  401015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-103686/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1123 09:07:19.810565  401015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-103686/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1123 09:07:19.828331  401015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-103686/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1123 09:07:19.845642  401015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-103686/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1671 bytes)
	I1123 09:07:19.862324  401015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/default-k8s-diff-port-602386/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1123 09:07:19.878721  401015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/default-k8s-diff-port-602386/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1123 09:07:19.896582  401015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/default-k8s-diff-port-602386/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1123 09:07:19.913593  401015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/default-k8s-diff-port-602386/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1123 09:07:19.930808  401015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-103686/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1123 09:07:19.949029  401015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-103686/.minikube/certs/107234.pem --> /usr/share/ca-certificates/107234.pem (1338 bytes)
	I1123 09:07:19.966411  401015 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-103686/.minikube/files/etc/ssl/certs/1072342.pem --> /usr/share/ca-certificates/1072342.pem (1708 bytes)
	I1123 09:07:19.984209  401015 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1123 09:07:19.997358  401015 ssh_runner.go:195] Run: openssl version
	I1123 09:07:20.003473  401015 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1072342.pem && ln -fs /usr/share/ca-certificates/1072342.pem /etc/ssl/certs/1072342.pem"
	I1123 09:07:20.011717  401015 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1072342.pem
	I1123 09:07:20.015358  401015 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 23 08:25 /usr/share/ca-certificates/1072342.pem
	I1123 09:07:20.015405  401015 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1072342.pem
	I1123 09:07:20.049141  401015 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1072342.pem /etc/ssl/certs/3ec20f2e.0"
	I1123 09:07:20.058176  401015 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1123 09:07:20.066816  401015 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1123 09:07:20.070528  401015 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 23 08:20 /usr/share/ca-certificates/minikubeCA.pem
	I1123 09:07:20.070578  401015 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1123 09:07:20.105878  401015 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1123 09:07:20.114916  401015 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/107234.pem && ln -fs /usr/share/ca-certificates/107234.pem /etc/ssl/certs/107234.pem"
	I1123 09:07:20.123688  401015 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/107234.pem
	I1123 09:07:20.128145  401015 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 23 08:25 /usr/share/ca-certificates/107234.pem
	I1123 09:07:20.128200  401015 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/107234.pem
	I1123 09:07:20.173506  401015 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/107234.pem /etc/ssl/certs/51391683.0"
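	(The test/ln pairs above recreate what c_rehash does: OpenSSL locates trust anchors in /etc/ssl/certs through symlinks named <subject-hash>.0, where the hash comes from openssl x509 -hash. Reproduced by hand for the minikube CA, whose hash is the b5213941 seen above:

	    HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"   # -> b5213941.0
	)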
	I1123 09:07:20.184870  401015 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1123 09:07:20.189689  401015 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1123 09:07:20.189762  401015 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-602386 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-602386 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 09:07:20.189848  401015 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1123 09:07:20.189895  401015 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1123 09:07:20.225011  401015 cri.go:89] found id: ""
	I1123 09:07:20.225090  401015 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1123 09:07:20.235492  401015 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1123 09:07:20.246934  401015 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1123 09:07:20.247017  401015 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1123 09:07:20.264437  401015 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1123 09:07:20.264466  401015 kubeadm.go:158] found existing configuration files:
	
	I1123 09:07:20.264518  401015 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1123 09:07:20.275374  401015 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1123 09:07:20.275439  401015 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1123 09:07:20.284754  401015 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1123 09:07:20.295461  401015 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1123 09:07:20.295537  401015 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1123 09:07:20.306298  401015 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1123 09:07:20.319098  401015 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1123 09:07:20.319176  401015 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1123 09:07:20.330779  401015 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1123 09:07:20.342694  401015 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1123 09:07:20.342766  401015 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
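	(The four grep/rm pairs above are an idempotent stale-config sweep: any kubeconfig under /etc/kubernetes that does not already reference this cluster's endpoint is deleted so kubeadm regenerates it; here every grep exits with status 2 because the files do not exist yet. Condensed into an equivalent sketch, not minikube's code:

	    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	      sudo grep -q 'https://control-plane.minikube.internal:8444' "/etc/kubernetes/$f" \
	        || sudo rm -f "/etc/kubernetes/$f"
	    done
	)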
	I1123 09:07:20.354759  401015 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1123 09:07:20.410343  401015 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1123 09:07:20.410453  401015 kubeadm.go:319] [preflight] Running pre-flight checks
	I1123 09:07:20.441830  401015 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1123 09:07:20.442348  401015 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1044-gcp
	I1123 09:07:20.442438  401015 kubeadm.go:319] OS: Linux
	I1123 09:07:20.442507  401015 kubeadm.go:319] CGROUPS_CPU: enabled
	I1123 09:07:20.442571  401015 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1123 09:07:20.442645  401015 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1123 09:07:20.442711  401015 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1123 09:07:20.442786  401015 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1123 09:07:20.442846  401015 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1123 09:07:20.442930  401015 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1123 09:07:20.443011  401015 kubeadm.go:319] CGROUPS_IO: enabled
	I1123 09:07:20.520559  401015 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1123 09:07:20.520711  401015 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1123 09:07:20.520843  401015 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1123 09:07:20.530713  401015 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1123 09:07:22.235830  397302 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1123 09:07:22.235922  397302 kubeadm.go:319] [preflight] Running pre-flight checks
	I1123 09:07:22.236071  397302 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1123 09:07:22.236138  397302 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1044-gcp
	I1123 09:07:22.236215  397302 kubeadm.go:319] OS: Linux
	I1123 09:07:22.236290  397302 kubeadm.go:319] CGROUPS_CPU: enabled
	I1123 09:07:22.236368  397302 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1123 09:07:22.236442  397302 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1123 09:07:22.236519  397302 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1123 09:07:22.236575  397302 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1123 09:07:22.236666  397302 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1123 09:07:22.236751  397302 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1123 09:07:22.236818  397302 kubeadm.go:319] CGROUPS_IO: enabled
	I1123 09:07:22.236942  397302 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1123 09:07:22.237071  397302 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1123 09:07:22.237197  397302 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1123 09:07:22.237294  397302 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1123 09:07:22.238684  397302 out.go:252]   - Generating certificates and keys ...
	I1123 09:07:22.238751  397302 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1123 09:07:22.238832  397302 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1123 09:07:22.238893  397302 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1123 09:07:22.238950  397302 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1123 09:07:22.239031  397302 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1123 09:07:22.239077  397302 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1123 09:07:22.239121  397302 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1123 09:07:22.239224  397302 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [embed-certs-529341 localhost] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1123 09:07:22.239279  397302 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1123 09:07:22.239394  397302 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [embed-certs-529341 localhost] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1123 09:07:22.239470  397302 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1123 09:07:22.239539  397302 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1123 09:07:22.239607  397302 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1123 09:07:22.239701  397302 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1123 09:07:22.239749  397302 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1123 09:07:22.239798  397302 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1123 09:07:22.239847  397302 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1123 09:07:22.239907  397302 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1123 09:07:22.239956  397302 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1123 09:07:22.240067  397302 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1123 09:07:22.240230  397302 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1123 09:07:22.241697  397302 out.go:252]   - Booting up control plane ...
	I1123 09:07:22.241774  397302 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1123 09:07:22.241856  397302 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1123 09:07:22.241935  397302 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1123 09:07:22.242080  397302 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1123 09:07:22.242220  397302 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1123 09:07:22.242329  397302 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1123 09:07:22.242406  397302 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1123 09:07:22.242440  397302 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1123 09:07:22.242627  397302 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1123 09:07:22.242769  397302 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1123 09:07:22.242856  397302 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 501.420155ms
	I1123 09:07:22.243015  397302 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1123 09:07:22.243108  397302 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.103.2:8443/livez
	I1123 09:07:22.243228  397302 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1123 09:07:22.243335  397302 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1123 09:07:22.243432  397302 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 2.155807151s
	I1123 09:07:22.243495  397302 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.38189193s
	I1123 09:07:22.243575  397302 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 4.002103744s
	I1123 09:07:22.243675  397302 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1123 09:07:22.243786  397302 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1123 09:07:22.243842  397302 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1123 09:07:22.244089  397302 kubeadm.go:319] [mark-control-plane] Marking the node embed-certs-529341 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1123 09:07:22.244169  397302 kubeadm.go:319] [bootstrap-token] Using token: zx2uia.the17ykcpofxcj65
	I1123 09:07:22.246489  397302 out.go:252]   - Configuring RBAC rules ...
	I1123 09:07:22.246634  397302 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1123 09:07:22.246761  397302 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1123 09:07:22.246874  397302 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1123 09:07:22.246993  397302 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1123 09:07:22.247111  397302 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1123 09:07:22.247227  397302 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1123 09:07:22.247365  397302 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1123 09:07:22.247428  397302 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1123 09:07:22.247503  397302 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1123 09:07:22.247516  397302 kubeadm.go:319] 
	I1123 09:07:22.247580  397302 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1123 09:07:22.247584  397302 kubeadm.go:319] 
	I1123 09:07:22.247653  397302 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1123 09:07:22.247660  397302 kubeadm.go:319] 
	I1123 09:07:22.247684  397302 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1123 09:07:22.247739  397302 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1123 09:07:22.247779  397302 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1123 09:07:22.247783  397302 kubeadm.go:319] 
	I1123 09:07:22.247824  397302 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1123 09:07:22.247827  397302 kubeadm.go:319] 
	I1123 09:07:22.247864  397302 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1123 09:07:22.247867  397302 kubeadm.go:319] 
	I1123 09:07:22.247907  397302 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1123 09:07:22.247980  397302 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1123 09:07:22.248037  397302 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1123 09:07:22.248043  397302 kubeadm.go:319] 
	I1123 09:07:22.248123  397302 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1123 09:07:22.248214  397302 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1123 09:07:22.248225  397302 kubeadm.go:319] 
	I1123 09:07:22.248356  397302 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token zx2uia.the17ykcpofxcj65 \
	I1123 09:07:22.248493  397302 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:25411732a305fa463b7606eb24f85c2336be0d99fc4e5db190f3fbac97d3dca3 \
	I1123 09:07:22.248521  397302 kubeadm.go:319] 	--control-plane 
	I1123 09:07:22.248529  397302 kubeadm.go:319] 
	I1123 09:07:22.248635  397302 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1123 09:07:22.248642  397302 kubeadm.go:319] 
	I1123 09:07:22.248709  397302 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token zx2uia.the17ykcpofxcj65 \
	I1123 09:07:22.248807  397302 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:25411732a305fa463b7606eb24f85c2336be0d99fc4e5db190f3fbac97d3dca3 
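	(The --discovery-token-ca-cert-hash printed above is a SHA-256 digest of the cluster CA's public key in DER form; joining nodes use it to pin the CA during TLS bootstrap. With this cluster's certificatesDir, /var/lib/minikube/certs, it can be recomputed on the node with the command from the upstream kubeadm docs, assuming an RSA CA key as kubeadm generates by default:

	    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	      | openssl rsa -pubin -outform der 2>/dev/null \
	      | openssl dgst -sha256 -hex | sed 's/^.* //'
	)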
	I1123 09:07:22.248818  397302 cni.go:84] Creating CNI manager for ""
	I1123 09:07:22.248826  397302 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1123 09:07:22.250162  397302 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1123 09:07:20.533009  401015 out.go:252]   - Generating certificates and keys ...
	I1123 09:07:20.533119  401015 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1123 09:07:20.533224  401015 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1123 09:07:20.841504  401015 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1123 09:07:21.060777  401015 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1123 09:07:21.409075  401015 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1123 09:07:21.631043  401015 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1123 09:07:22.085083  401015 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1123 09:07:22.085287  401015 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [default-k8s-diff-port-602386 localhost] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1123 09:07:22.297771  401015 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1123 09:07:22.297998  401015 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [default-k8s-diff-port-602386 localhost] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1123 09:07:22.503729  401015 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1123 09:07:22.728204  401015 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1123 09:07:23.208782  401015 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1123 09:07:23.208962  401015 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1123 09:07:23.908442  401015 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1123 09:07:24.239222  401015 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1123 09:07:24.528457  401015 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1123 09:07:24.605042  401015 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1123 09:07:24.965699  401015 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1123 09:07:24.966356  401015 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1123 09:07:24.970283  401015 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1123 09:07:22.251226  397302 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1123 09:07:22.255657  397302 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1123 09:07:22.255680  397302 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1123 09:07:22.268963  397302 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1123 09:07:22.495688  397302 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1123 09:07:22.495772  397302 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 09:07:22.495772  397302 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-529341 minikube.k8s.io/updated_at=2025_11_23T09_07_22_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=50c3a8a3c03e8a84b6c978a884d21c3de8c6d4f1 minikube.k8s.io/name=embed-certs-529341 minikube.k8s.io/primary=true
	I1123 09:07:22.576709  397302 ops.go:34] apiserver oom_adj: -16
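	(The -16 read back above means the kernel's OOM killer will strongly prefer to kill other processes before the API server: the legacy /proc/<pid>/oom_adj scale runs from -17, never kill, to +15. Modern kernels expose the same knob as oom_score_adj on a -1000..1000 scale, translated roughly as oom_adj * 1000 / 17:

	    cat /proc/$(pgrep kube-apiserver)/oom_score_adj   # roughly -941 for oom_adj -16
	)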
	I1123 09:07:22.576866  397302 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 09:07:23.077081  397302 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 09:07:23.577003  397302 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 09:07:24.077119  397302 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 09:07:24.577540  397302 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 09:07:25.077487  397302 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 09:07:25.577943  397302 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 09:07:26.077145  397302 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 09:07:26.576921  397302 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 09:07:27.077963  397302 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 09:07:27.169704  397302 kubeadm.go:1114] duration metric: took 4.674000226s to wait for elevateKubeSystemPrivileges
	I1123 09:07:27.169753  397302 kubeadm.go:403] duration metric: took 15.148429231s to StartCluster
	I1123 09:07:27.169778  397302 settings.go:142] acquiring lock: {Name:mk7e59eae8b3289f60fef384e6a5716369959bb2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 09:07:27.169867  397302 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21969-103686/kubeconfig
	I1123 09:07:27.172415  397302 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21969-103686/kubeconfig: {Name:mk8cdc20da988b29689f768b3cb01ff7e637077d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 09:07:27.172740  397302 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1123 09:07:27.172747  397302 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1123 09:07:27.172763  397302 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1123 09:07:27.172848  397302 addons.go:70] Setting storage-provisioner=true in profile "embed-certs-529341"
	I1123 09:07:27.172868  397302 addons.go:239] Setting addon storage-provisioner=true in "embed-certs-529341"
	I1123 09:07:27.172898  397302 host.go:66] Checking if "embed-certs-529341" exists ...
	I1123 09:07:27.172932  397302 config.go:182] Loaded profile config "embed-certs-529341": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 09:07:27.173001  397302 addons.go:70] Setting default-storageclass=true in profile "embed-certs-529341"
	I1123 09:07:27.173016  397302 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-529341"
	I1123 09:07:27.173335  397302 cli_runner.go:164] Run: docker container inspect embed-certs-529341 --format={{.State.Status}}
	I1123 09:07:27.173468  397302 cli_runner.go:164] Run: docker container inspect embed-certs-529341 --format={{.State.Status}}
	I1123 09:07:27.174908  397302 out.go:179] * Verifying Kubernetes components...
	I1123 09:07:27.176439  397302 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 09:07:27.208110  397302 addons.go:239] Setting addon default-storageclass=true in "embed-certs-529341"
	I1123 09:07:27.208462  397302 host.go:66] Checking if "embed-certs-529341" exists ...
	I1123 09:07:27.209144  397302 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1123 09:07:27.209563  397302 cli_runner.go:164] Run: docker container inspect embed-certs-529341 --format={{.State.Status}}
	I1123 09:07:27.210454  397302 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1123 09:07:27.210475  397302 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1123 09:07:27.210527  397302 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-529341
	I1123 09:07:27.242987  397302 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/21969-103686/.minikube/machines/embed-certs-529341/id_rsa Username:docker}
	I1123 09:07:27.258102  397302 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1123 09:07:27.258133  397302 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1123 09:07:27.258208  397302 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-529341
	I1123 09:07:27.286963  397302 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33098 SSHKeyPath:/home/jenkins/minikube-integration/21969-103686/.minikube/machines/embed-certs-529341/id_rsa Username:docker}
	I1123 09:07:27.293216  397302 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.103.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1123 09:07:27.384695  397302 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 09:07:27.397197  397302 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1123 09:07:27.448559  397302 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1123 09:07:27.607323  397302 start.go:977] {"host.minikube.internal": 192.168.103.1} host record injected into CoreDNS's ConfigMap
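	(The sed pipeline above splices a hosts block into the CoreDNS Corefile ahead of the forward plugin, and a log directive before errors, so host.minikube.internal resolves to the host gateway from inside pods; fallthrough hands every other name back to the remaining plugins. The injected fragment is exactly:

	        hosts {
	           192.168.103.1 host.minikube.internal
	           fallthrough
	        }
	)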
	I1123 09:07:27.609010  397302 node_ready.go:35] waiting up to 6m0s for node "embed-certs-529341" to be "Ready" ...
	I1123 09:07:27.788624  397302 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1123 09:07:24.971841  401015 out.go:252]   - Booting up control plane ...
	I1123 09:07:24.971988  401015 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1123 09:07:24.972109  401015 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1123 09:07:24.972750  401015 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1123 09:07:24.987209  401015 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1123 09:07:24.987344  401015 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1123 09:07:24.994006  401015 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1123 09:07:24.994266  401015 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1123 09:07:24.994331  401015 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1123 09:07:25.097896  401015 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1123 09:07:25.098068  401015 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1123 09:07:26.101767  401015 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.003212031s
	I1123 09:07:26.112034  401015 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1123 09:07:26.112316  401015 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.94.2:8444/livez
	I1123 09:07:26.112453  401015 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1123 09:07:26.112557  401015 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1123 09:07:27.611786  401015 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.499544748s
	I1123 09:07:28.233017  401015 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.12098108s
	
	
	==> CRI-O <==
	Nov 23 09:07:17 no-preload-619589 crio[763]: time="2025-11-23T09:07:17.399861597Z" level=info msg="Starting container: 8c7f42c057ab1dd20de0d8bb5a243f083ec5049e77817fa2b46444abd39864c7" id=a634b690-1c03-4db0-bc25-9f8f9f3f9ef2 name=/runtime.v1.RuntimeService/StartContainer
	Nov 23 09:07:17 no-preload-619589 crio[763]: time="2025-11-23T09:07:17.402250886Z" level=info msg="Started container" PID=2926 containerID=8c7f42c057ab1dd20de0d8bb5a243f083ec5049e77817fa2b46444abd39864c7 description=kube-system/coredns-66bc5c9577-dhxwz/coredns id=a634b690-1c03-4db0-bc25-9f8f9f3f9ef2 name=/runtime.v1.RuntimeService/StartContainer sandboxID=09dc4fbf20840bc5d0e6f55b769664c58899217561ca411495c25037415773ed
	Nov 23 09:07:20 no-preload-619589 crio[763]: time="2025-11-23T09:07:20.726815405Z" level=info msg="Running pod sandbox: default/busybox/POD" id=886a422a-b1dd-423a-a8c7-89f61cfa0854 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 23 09:07:20 no-preload-619589 crio[763]: time="2025-11-23T09:07:20.726902543Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 09:07:20 no-preload-619589 crio[763]: time="2025-11-23T09:07:20.732711565Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:604e2a6d1caa6e4deb6f0daf0eb2bd1b476cb842cfd8fcb559582fec31287e25 UID:28bf9ee2-1ef2-48b8-81bb-3529cc01dc8c NetNS:/var/run/netns/6a7a95c8-8009-4387-8ab2-147203fb0506 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000138b00}] Aliases:map[]}"
	Nov 23 09:07:20 no-preload-619589 crio[763]: time="2025-11-23T09:07:20.732758297Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Nov 23 09:07:20 no-preload-619589 crio[763]: time="2025-11-23T09:07:20.744505173Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:604e2a6d1caa6e4deb6f0daf0eb2bd1b476cb842cfd8fcb559582fec31287e25 UID:28bf9ee2-1ef2-48b8-81bb-3529cc01dc8c NetNS:/var/run/netns/6a7a95c8-8009-4387-8ab2-147203fb0506 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000138b00}] Aliases:map[]}"
	Nov 23 09:07:20 no-preload-619589 crio[763]: time="2025-11-23T09:07:20.744646662Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Nov 23 09:07:20 no-preload-619589 crio[763]: time="2025-11-23T09:07:20.745677426Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Nov 23 09:07:20 no-preload-619589 crio[763]: time="2025-11-23T09:07:20.746898588Z" level=info msg="Ran pod sandbox 604e2a6d1caa6e4deb6f0daf0eb2bd1b476cb842cfd8fcb559582fec31287e25 with infra container: default/busybox/POD" id=886a422a-b1dd-423a-a8c7-89f61cfa0854 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 23 09:07:20 no-preload-619589 crio[763]: time="2025-11-23T09:07:20.748338739Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=53783099-ed77-4ed1-a8e7-f761f4d48649 name=/runtime.v1.ImageService/ImageStatus
	Nov 23 09:07:20 no-preload-619589 crio[763]: time="2025-11-23T09:07:20.748469176Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=53783099-ed77-4ed1-a8e7-f761f4d48649 name=/runtime.v1.ImageService/ImageStatus
	Nov 23 09:07:20 no-preload-619589 crio[763]: time="2025-11-23T09:07:20.748517445Z" level=info msg="Neither image nor artifact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=53783099-ed77-4ed1-a8e7-f761f4d48649 name=/runtime.v1.ImageService/ImageStatus
	Nov 23 09:07:20 no-preload-619589 crio[763]: time="2025-11-23T09:07:20.749150766Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=62a2a195-49d5-4e75-9d5d-8641191dc53e name=/runtime.v1.ImageService/PullImage
	Nov 23 09:07:20 no-preload-619589 crio[763]: time="2025-11-23T09:07:20.750960514Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 23 09:07:22 no-preload-619589 crio[763]: time="2025-11-23T09:07:22.738848716Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=62a2a195-49d5-4e75-9d5d-8641191dc53e name=/runtime.v1.ImageService/PullImage
	Nov 23 09:07:22 no-preload-619589 crio[763]: time="2025-11-23T09:07:22.739538047Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=fce56c81-ca40-4b4d-85d5-a5e252f50748 name=/runtime.v1.ImageService/ImageStatus
	Nov 23 09:07:22 no-preload-619589 crio[763]: time="2025-11-23T09:07:22.74093445Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=fd086900-535b-4a06-a5d1-3e7523fe38e8 name=/runtime.v1.ImageService/ImageStatus
	Nov 23 09:07:22 no-preload-619589 crio[763]: time="2025-11-23T09:07:22.744253009Z" level=info msg="Creating container: default/busybox/busybox" id=9ba62361-9e1f-4cf4-9e51-cc582ef0939b name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 09:07:22 no-preload-619589 crio[763]: time="2025-11-23T09:07:22.744385682Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 09:07:22 no-preload-619589 crio[763]: time="2025-11-23T09:07:22.748920042Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 09:07:22 no-preload-619589 crio[763]: time="2025-11-23T09:07:22.749341369Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 09:07:22 no-preload-619589 crio[763]: time="2025-11-23T09:07:22.791529286Z" level=info msg="Created container 548b580a8a188d9c3121c88838317d4bc4785899a4e0c8e5fac354303be1d538: default/busybox/busybox" id=9ba62361-9e1f-4cf4-9e51-cc582ef0939b name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 09:07:22 no-preload-619589 crio[763]: time="2025-11-23T09:07:22.792143103Z" level=info msg="Starting container: 548b580a8a188d9c3121c88838317d4bc4785899a4e0c8e5fac354303be1d538" id=8b52cbdb-8124-4ff0-aea0-241b3be54b85 name=/runtime.v1.RuntimeService/StartContainer
	Nov 23 09:07:22 no-preload-619589 crio[763]: time="2025-11-23T09:07:22.794301765Z" level=info msg="Started container" PID=3000 containerID=548b580a8a188d9c3121c88838317d4bc4785899a4e0c8e5fac354303be1d538 description=default/busybox/busybox id=8b52cbdb-8124-4ff0-aea0-241b3be54b85 name=/runtime.v1.RuntimeService/StartContainer sandboxID=604e2a6d1caa6e4deb6f0daf0eb2bd1b476cb842cfd8fcb559582fec31287e25
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	548b580a8a188       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998   7 seconds ago       Running             busybox                   0                   604e2a6d1caa6       busybox                                     default
	8c7f42c057ab1       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                      12 seconds ago      Running             coredns                   0                   09dc4fbf20840       coredns-66bc5c9577-dhxwz                    kube-system
	bd9142fd437b3       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      12 seconds ago      Running             storage-provisioner       0                   50fab948451b2       storage-provisioner                         kube-system
	2edfa21882754       docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11    23 seconds ago      Running             kindnet-cni               0                   f166974dafe9c       kindnet-dp6kh                               kube-system
	be599bbdf840d       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                      27 seconds ago      Running             kube-proxy                0                   6bb09c1389269       kube-proxy-qbkwc                            kube-system
	a677a50992f6c       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                      38 seconds ago      Running             kube-controller-manager   0                   21dc9bc1d54e5       kube-controller-manager-no-preload-619589   kube-system
	5821e0ff0466b       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                      38 seconds ago      Running             kube-apiserver            0                   49d47103fffcd       kube-apiserver-no-preload-619589            kube-system
	12253b68028e9       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                      38 seconds ago      Running             etcd                      0                   593c0fddc70fb       etcd-no-preload-619589                      kube-system
	19e7180d30e2a       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                      38 seconds ago      Running             kube-scheduler            0                   f25ebca0f41c0       kube-scheduler-no-preload-619589            kube-system
	
	
	==> coredns [8c7f42c057ab1dd20de0d8bb5a243f083ec5049e77817fa2b46444abd39864c7] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:45313 - 36639 "HINFO IN 5997540166380966446.8342322405175848259. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.014849084s
	
	
	==> describe nodes <==
	Name:               no-preload-619589
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-619589
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=50c3a8a3c03e8a84b6c978a884d21c3de8c6d4f1
	                    minikube.k8s.io/name=no-preload-619589
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_23T09_06_57_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 23 Nov 2025 09:06:53 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-619589
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 23 Nov 2025 09:07:26 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 23 Nov 2025 09:07:26 +0000   Sun, 23 Nov 2025 09:06:51 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 23 Nov 2025 09:07:26 +0000   Sun, 23 Nov 2025 09:06:51 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 23 Nov 2025 09:07:26 +0000   Sun, 23 Nov 2025 09:06:51 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 23 Nov 2025 09:07:26 +0000   Sun, 23 Nov 2025 09:07:16 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    no-preload-619589
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 9629f1d5bc1ed524a56ce23c69214c09
	  System UUID:                3483a19d-ff48-49f2-b35e-7cee468a4ef8
	  Boot ID:                    49386d37-de97-442f-8364-9b77578bcf47
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         9s
	  kube-system                 coredns-66bc5c9577-dhxwz                     100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     28s
	  kube-system                 etcd-no-preload-619589                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         33s
	  kube-system                 kindnet-dp6kh                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      28s
	  kube-system                 kube-apiserver-no-preload-619589             250m (3%)     0 (0%)      0 (0%)           0 (0%)         33s
	  kube-system                 kube-controller-manager-no-preload-619589    200m (2%)     0 (0%)      0 (0%)           0 (0%)         34s
	  kube-system                 kube-proxy-qbkwc                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         28s
	  kube-system                 kube-scheduler-no-preload-619589             100m (1%)     0 (0%)      0 (0%)           0 (0%)         34s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         27s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 27s   kube-proxy       
	  Normal  Starting                 33s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  33s   kubelet          Node no-preload-619589 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    33s   kubelet          Node no-preload-619589 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     33s   kubelet          Node no-preload-619589 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           29s   node-controller  Node no-preload-619589 event: Registered Node no-preload-619589 in Controller
	  Normal  NodeReady                13s   kubelet          Node no-preload-619589 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff e2 f6 b3 df 65 66 08 06
	[ +15.220231] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff ce d6 cd 1c d5 af 08 06
	[  +0.016823] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff 42 21 9e 70 54 d2 08 06
	[  +0.853950] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 8a f3 da 67 50 34 08 06
	[  +0.000402] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff e2 f6 b3 df 65 66 08 06
	[Nov23 09:06] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 3a fe f0 bb b2 e5 08 06
	[  +0.000433] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000017] ll header: 00000000: ff ff ff ff ff ff 42 21 9e 70 54 d2 08 06
	[ +22.099976] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff fa bc 42 68 ca 38 08 06
	[  +0.042361] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 02 6f 93 2c ed 12 08 06
	[ +12.988668] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff c6 40 c7 0d 08 88 08 06
	[  +0.000458] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff c6 f2 c5 3b d5 0a 08 06
	[  +8.074904] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff ba d8 15 23 cb ea 08 06
	[  +0.000480] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff fa bc 42 68 ca 38 08 06
	
	
	==> etcd [12253b68028e93d50cabaaef6b26417a86f28c658d10574c0e33c91cf241ea6a] <==
	{"level":"warn","ts":"2025-11-23T09:06:52.937206Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56590","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:06:52.945611Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56610","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:06:52.962854Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56638","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:06:52.969539Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56666","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:06:52.975617Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56678","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:06:52.983074Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56708","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:06:52.991734Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56716","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:06:52.999853Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56728","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:06:53.007594Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56734","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:06:53.014884Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56762","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:06:53.022479Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56790","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:06:53.031871Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56808","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:06:53.049838Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56824","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:06:53.060013Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56854","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:06:53.072424Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56876","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:06:53.084639Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56904","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:06:53.092909Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56930","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:06:53.103053Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56938","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:06:53.122338Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56968","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:06:53.133253Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56978","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:06:53.145121Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56986","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:06:53.215706Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57014","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-23T09:07:05.564738Z","caller":"traceutil/trace.go:172","msg":"trace[1140338670] transaction","detail":"{read_only:false; response_revision:383; number_of_response:1; }","duration":"114.115586ms","start":"2025-11-23T09:07:05.450605Z","end":"2025-11-23T09:07:05.564720Z","steps":["trace[1140338670] 'process raft request'  (duration: 18.46379ms)","trace[1140338670] 'compare'  (duration: 95.53016ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-23T09:07:06.451361Z","caller":"traceutil/trace.go:172","msg":"trace[2113687516] transaction","detail":"{read_only:false; response_revision:384; number_of_response:1; }","duration":"110.642613ms","start":"2025-11-23T09:07:06.340700Z","end":"2025-11-23T09:07:06.451342Z","steps":["trace[2113687516] 'process raft request'  (duration: 110.533194ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-23T09:07:10.711477Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"134.087594ms","expected-duration":"100ms","prefix":"","request":"header:<ID:9722597251121797871 > lease_revoke:<id:06ed9aaff74dd8f7>","response":"size:28"}
	
	
	==> kernel <==
	 09:07:30 up  1:49,  0 user,  load average: 4.67, 3.92, 2.55
	Linux no-preload-619589 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [2edfa21882754db3f564f9a4d0519e97c0b16ccf4615cb69cc8b30a314861610] <==
	I1123 09:07:06.677796       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1123 09:07:06.772092       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1123 09:07:06.772279       1 main.go:148] setting mtu 1500 for CNI 
	I1123 09:07:06.772301       1 main.go:178] kindnetd IP family: "ipv4"
	I1123 09:07:06.772332       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-23T09:07:06Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1123 09:07:06.975868       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1123 09:07:06.975942       1 controller.go:381] "Waiting for informer caches to sync"
	I1123 09:07:06.975962       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1123 09:07:06.976243       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1123 09:07:07.376749       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1123 09:07:07.376772       1 metrics.go:72] Registering metrics
	I1123 09:07:07.376834       1 controller.go:711] "Syncing nftables rules"
	I1123 09:07:16.981059       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1123 09:07:16.981105       1 main.go:301] handling current node
	I1123 09:07:26.975732       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1123 09:07:26.975774       1 main.go:301] handling current node
	
	
	==> kube-apiserver [5821e0ff0466be5b148d736ca22771498a373810189bbfe7786cfd41100d3c32] <==
	I1123 09:06:53.802015       1 default_servicecidr_controller.go:166] Creating default ServiceCIDR with CIDRs: [10.96.0.0/12]
	I1123 09:06:53.807322       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1123 09:06:53.807555       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1123 09:06:53.812779       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1123 09:06:53.813164       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1123 09:06:53.818557       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1123 09:06:53.837342       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1123 09:06:54.683553       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1123 09:06:54.689400       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1123 09:06:54.689420       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1123 09:06:55.228584       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1123 09:06:55.277655       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1123 09:06:55.387338       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1123 09:06:55.394179       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1123 09:06:55.395260       1 controller.go:667] quota admission added evaluator for: endpoints
	I1123 09:06:55.399907       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1123 09:06:55.721937       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1123 09:06:56.430608       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1123 09:06:56.443104       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1123 09:06:56.452629       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1123 09:07:01.325822       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1123 09:07:01.334096       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1123 09:07:01.474023       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1123 09:07:01.774234       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	E1123 09:07:28.508154       1 conn.go:339] Error on socket receive: read tcp 192.168.85.2:8443->192.168.85.1:38360: use of closed network connection
	
	
	==> kube-controller-manager [a677a50992f6c0e0a44e2202c65def667ca7969c7e27f9b8df82c0ea660e550f] <==
	I1123 09:07:00.718227       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1123 09:07:00.718237       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1123 09:07:00.719021       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1123 09:07:00.719839       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1123 09:07:00.720058       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1123 09:07:00.720315       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1123 09:07:00.720359       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1123 09:07:00.720419       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1123 09:07:00.720545       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1123 09:07:00.720558       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1123 09:07:00.720627       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1123 09:07:00.720771       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1123 09:07:00.720862       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1123 09:07:00.721362       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1123 09:07:00.725444       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1123 09:07:00.725485       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1123 09:07:00.727051       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1123 09:07:00.727131       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1123 09:07:00.727200       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1123 09:07:00.727216       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1123 09:07:00.727224       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1123 09:07:00.733625       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1123 09:07:00.742998       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="no-preload-619589" podCIDRs=["10.244.0.0/24"]
	I1123 09:07:00.748653       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1123 09:07:20.660638       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [be599bbdf840d5fdbe49e0999885d4ad78c8fb6c519b80bb8dd1f81625eaec47] <==
	I1123 09:07:02.530377       1 server_linux.go:53] "Using iptables proxy"
	I1123 09:07:02.609748       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1123 09:07:02.711833       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1123 09:07:02.711871       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1123 09:07:02.712034       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1123 09:07:02.733278       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1123 09:07:02.733387       1 server_linux.go:132] "Using iptables Proxier"
	I1123 09:07:02.739789       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1123 09:07:02.740201       1 server.go:527] "Version info" version="v1.34.1"
	I1123 09:07:02.740241       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1123 09:07:02.742481       1 config.go:200] "Starting service config controller"
	I1123 09:07:02.742528       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1123 09:07:02.742585       1 config.go:106] "Starting endpoint slice config controller"
	I1123 09:07:02.742597       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1123 09:07:02.742615       1 config.go:403] "Starting serviceCIDR config controller"
	I1123 09:07:02.742620       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1123 09:07:02.743276       1 config.go:309] "Starting node config controller"
	I1123 09:07:02.743302       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1123 09:07:02.743310       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1123 09:07:02.842697       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1123 09:07:02.842706       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1123 09:07:02.842721       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [19e7180d30e2a613e3dadb43bb7953879bc2d151b86b5909972109a2f0981a45] <==
	E1123 09:06:53.757265       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1123 09:06:53.756525       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1123 09:06:53.757110       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1123 09:06:53.756529       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1123 09:06:53.757425       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1123 09:06:53.757425       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1123 09:06:53.757819       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1123 09:06:53.757909       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1123 09:06:53.757923       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1123 09:06:53.758051       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1123 09:06:53.758093       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1123 09:06:53.758265       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1123 09:06:53.758040       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1123 09:06:54.567161       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1123 09:06:54.594697       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1123 09:06:54.639128       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1123 09:06:54.681663       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1123 09:06:54.717093       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1123 09:06:54.751515       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1123 09:06:54.772040       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1123 09:06:54.800280       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1123 09:06:54.832187       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1123 09:06:55.000431       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1123 09:06:55.045508       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	I1123 09:06:57.452834       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 23 09:07:01 no-preload-619589 kubelet[2326]: I1123 09:07:01.529496    2326 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a4901c5e-17b7-4174-a5d6-32fe5ec489a7-lib-modules\") pod \"kindnet-dp6kh\" (UID: \"a4901c5e-17b7-4174-a5d6-32fe5ec489a7\") " pod="kube-system/kindnet-dp6kh"
	Nov 23 09:07:01 no-preload-619589 kubelet[2326]: I1123 09:07:01.529833    2326 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9snxs\" (UniqueName: \"kubernetes.io/projected/a4901c5e-17b7-4174-a5d6-32fe5ec489a7-kube-api-access-9snxs\") pod \"kindnet-dp6kh\" (UID: \"a4901c5e-17b7-4174-a5d6-32fe5ec489a7\") " pod="kube-system/kindnet-dp6kh"
	Nov 23 09:07:01 no-preload-619589 kubelet[2326]: I1123 09:07:01.529923    2326 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a4901c5e-17b7-4174-a5d6-32fe5ec489a7-xtables-lock\") pod \"kindnet-dp6kh\" (UID: \"a4901c5e-17b7-4174-a5d6-32fe5ec489a7\") " pod="kube-system/kindnet-dp6kh"
	Nov 23 09:07:01 no-preload-619589 kubelet[2326]: I1123 09:07:01.530085    2326 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wt67v\" (UniqueName: \"kubernetes.io/projected/ec82425a-3713-4d37-85b7-4fec7ae69b78-kube-api-access-wt67v\") pod \"kube-proxy-qbkwc\" (UID: \"ec82425a-3713-4d37-85b7-4fec7ae69b78\") " pod="kube-system/kube-proxy-qbkwc"
	Nov 23 09:07:01 no-preload-619589 kubelet[2326]: I1123 09:07:01.530127    2326 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/ec82425a-3713-4d37-85b7-4fec7ae69b78-kube-proxy\") pod \"kube-proxy-qbkwc\" (UID: \"ec82425a-3713-4d37-85b7-4fec7ae69b78\") " pod="kube-system/kube-proxy-qbkwc"
	Nov 23 09:07:01 no-preload-619589 kubelet[2326]: I1123 09:07:01.530341    2326 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/a4901c5e-17b7-4174-a5d6-32fe5ec489a7-cni-cfg\") pod \"kindnet-dp6kh\" (UID: \"a4901c5e-17b7-4174-a5d6-32fe5ec489a7\") " pod="kube-system/kindnet-dp6kh"
	Nov 23 09:07:01 no-preload-619589 kubelet[2326]: I1123 09:07:01.530371    2326 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ec82425a-3713-4d37-85b7-4fec7ae69b78-xtables-lock\") pod \"kube-proxy-qbkwc\" (UID: \"ec82425a-3713-4d37-85b7-4fec7ae69b78\") " pod="kube-system/kube-proxy-qbkwc"
	Nov 23 09:07:01 no-preload-619589 kubelet[2326]: I1123 09:07:01.530540    2326 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ec82425a-3713-4d37-85b7-4fec7ae69b78-lib-modules\") pod \"kube-proxy-qbkwc\" (UID: \"ec82425a-3713-4d37-85b7-4fec7ae69b78\") " pod="kube-system/kube-proxy-qbkwc"
	Nov 23 09:07:01 no-preload-619589 kubelet[2326]: E1123 09:07:01.638366    2326 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Nov 23 09:07:01 no-preload-619589 kubelet[2326]: E1123 09:07:01.638409    2326 projected.go:196] Error preparing data for projected volume kube-api-access-9snxs for pod kube-system/kindnet-dp6kh: configmap "kube-root-ca.crt" not found
	Nov 23 09:07:01 no-preload-619589 kubelet[2326]: E1123 09:07:01.638365    2326 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Nov 23 09:07:01 no-preload-619589 kubelet[2326]: E1123 09:07:01.638515    2326 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/a4901c5e-17b7-4174-a5d6-32fe5ec489a7-kube-api-access-9snxs podName:a4901c5e-17b7-4174-a5d6-32fe5ec489a7 nodeName:}" failed. No retries permitted until 2025-11-23 09:07:02.138483492 +0000 UTC m=+5.930293247 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-9snxs" (UniqueName: "kubernetes.io/projected/a4901c5e-17b7-4174-a5d6-32fe5ec489a7-kube-api-access-9snxs") pod "kindnet-dp6kh" (UID: "a4901c5e-17b7-4174-a5d6-32fe5ec489a7") : configmap "kube-root-ca.crt" not found
	Nov 23 09:07:01 no-preload-619589 kubelet[2326]: E1123 09:07:01.638525    2326 projected.go:196] Error preparing data for projected volume kube-api-access-wt67v for pod kube-system/kube-proxy-qbkwc: configmap "kube-root-ca.crt" not found
	Nov 23 09:07:01 no-preload-619589 kubelet[2326]: E1123 09:07:01.638601    2326 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ec82425a-3713-4d37-85b7-4fec7ae69b78-kube-api-access-wt67v podName:ec82425a-3713-4d37-85b7-4fec7ae69b78 nodeName:}" failed. No retries permitted until 2025-11-23 09:07:02.138582814 +0000 UTC m=+5.930392552 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-wt67v" (UniqueName: "kubernetes.io/projected/ec82425a-3713-4d37-85b7-4fec7ae69b78-kube-api-access-wt67v") pod "kube-proxy-qbkwc" (UID: "ec82425a-3713-4d37-85b7-4fec7ae69b78") : configmap "kube-root-ca.crt" not found
	Nov 23 09:07:03 no-preload-619589 kubelet[2326]: I1123 09:07:03.365728    2326 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-qbkwc" podStartSLOduration=2.365704914 podStartE2EDuration="2.365704914s" podCreationTimestamp="2025-11-23 09:07:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 09:07:03.365543195 +0000 UTC m=+7.157352952" watchObservedRunningTime="2025-11-23 09:07:03.365704914 +0000 UTC m=+7.157514670"
	Nov 23 09:07:07 no-preload-619589 kubelet[2326]: I1123 09:07:07.382730    2326 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-dp6kh" podStartSLOduration=2.3024456349999998 podStartE2EDuration="6.382709752s" podCreationTimestamp="2025-11-23 09:07:01 +0000 UTC" firstStartedPulling="2025-11-23 09:07:02.412271792 +0000 UTC m=+6.204081542" lastFinishedPulling="2025-11-23 09:07:06.492535911 +0000 UTC m=+10.284345659" observedRunningTime="2025-11-23 09:07:07.38268855 +0000 UTC m=+11.174498303" watchObservedRunningTime="2025-11-23 09:07:07.382709752 +0000 UTC m=+11.174519507"
	Nov 23 09:07:16 no-preload-619589 kubelet[2326]: I1123 09:07:16.993017    2326 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Nov 23 09:07:17 no-preload-619589 kubelet[2326]: I1123 09:07:17.046003    2326 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/700b8476-b8f8-4865-9308-fc8b30ac5a5f-config-volume\") pod \"coredns-66bc5c9577-dhxwz\" (UID: \"700b8476-b8f8-4865-9308-fc8b30ac5a5f\") " pod="kube-system/coredns-66bc5c9577-dhxwz"
	Nov 23 09:07:17 no-preload-619589 kubelet[2326]: I1123 09:07:17.046052    2326 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/acbfaa48-c8ba-4200-b5d7-e8f168a2de80-tmp\") pod \"storage-provisioner\" (UID: \"acbfaa48-c8ba-4200-b5d7-e8f168a2de80\") " pod="kube-system/storage-provisioner"
	Nov 23 09:07:17 no-preload-619589 kubelet[2326]: I1123 09:07:17.046084    2326 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pmf4b\" (UniqueName: \"kubernetes.io/projected/700b8476-b8f8-4865-9308-fc8b30ac5a5f-kube-api-access-pmf4b\") pod \"coredns-66bc5c9577-dhxwz\" (UID: \"700b8476-b8f8-4865-9308-fc8b30ac5a5f\") " pod="kube-system/coredns-66bc5c9577-dhxwz"
	Nov 23 09:07:17 no-preload-619589 kubelet[2326]: I1123 09:07:17.046168    2326 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nd9xd\" (UniqueName: \"kubernetes.io/projected/acbfaa48-c8ba-4200-b5d7-e8f168a2de80-kube-api-access-nd9xd\") pod \"storage-provisioner\" (UID: \"acbfaa48-c8ba-4200-b5d7-e8f168a2de80\") " pod="kube-system/storage-provisioner"
	Nov 23 09:07:18 no-preload-619589 kubelet[2326]: I1123 09:07:18.406408    2326 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-dhxwz" podStartSLOduration=17.406386809 podStartE2EDuration="17.406386809s" podCreationTimestamp="2025-11-23 09:07:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 09:07:18.406097331 +0000 UTC m=+22.197907086" watchObservedRunningTime="2025-11-23 09:07:18.406386809 +0000 UTC m=+22.198196567"
	Nov 23 09:07:20 no-preload-619589 kubelet[2326]: I1123 09:07:20.418946    2326 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=18.41891669 podStartE2EDuration="18.41891669s" podCreationTimestamp="2025-11-23 09:07:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 09:07:18.440193538 +0000 UTC m=+22.232003300" watchObservedRunningTime="2025-11-23 09:07:20.41891669 +0000 UTC m=+24.210726440"
	Nov 23 09:07:20 no-preload-619589 kubelet[2326]: I1123 09:07:20.469852    2326 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j5k6n\" (UniqueName: \"kubernetes.io/projected/28bf9ee2-1ef2-48b8-81bb-3529cc01dc8c-kube-api-access-j5k6n\") pod \"busybox\" (UID: \"28bf9ee2-1ef2-48b8-81bb-3529cc01dc8c\") " pod="default/busybox"
	Nov 23 09:07:23 no-preload-619589 kubelet[2326]: I1123 09:07:23.419231    2326 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=1.427596171 podStartE2EDuration="3.419208622s" podCreationTimestamp="2025-11-23 09:07:20 +0000 UTC" firstStartedPulling="2025-11-23 09:07:20.748773294 +0000 UTC m=+24.540583041" lastFinishedPulling="2025-11-23 09:07:22.740385751 +0000 UTC m=+26.532195492" observedRunningTime="2025-11-23 09:07:23.419044475 +0000 UTC m=+27.210854230" watchObservedRunningTime="2025-11-23 09:07:23.419208622 +0000 UTC m=+27.211018377"
	
	
	==> storage-provisioner [bd9142fd437b3daf2277b8d72ea1d41edc838d5b3183a43879c2cc43937a117f] <==
	I1123 09:07:17.411914       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1123 09:07:17.421531       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1123 09:07:17.421592       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1123 09:07:17.424019       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:07:17.429652       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1123 09:07:17.429943       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1123 09:07:17.430261       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-619589_aa100d56-a3a4-4eed-bb30-ff128281a37d!
	I1123 09:07:17.430187       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"fa7c393f-59b5-485f-af92-2bac9d5f4377", APIVersion:"v1", ResourceVersion:"414", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-619589_aa100d56-a3a4-4eed-bb30-ff128281a37d became leader
	W1123 09:07:17.435594       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:07:17.444718       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1123 09:07:17.531398       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-619589_aa100d56-a3a4-4eed-bb30-ff128281a37d!
	W1123 09:07:19.448453       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:07:19.452838       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:07:21.456673       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:07:21.465276       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:07:23.469243       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:07:23.473886       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:07:25.477789       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:07:25.481850       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:07:27.486845       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:07:27.493749       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:07:29.497014       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:07:29.500986       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
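
The kubelet's pod_startup_latency_tracker lines above report two durations for the busybox pod: podStartE2EDuration (observed running time minus pod creation) and podStartSLOduration, which additionally subtracts the image-pull window. The logged timestamps check out; a minimal sketch of the arithmetic, with timestamps copied from the log (the few-microsecond residual against the logged SLO value comes from kubelet using monotonic-clock offsets):

	package main

	import (
		"fmt"
		"time"
	)

	func main() {
		parse := func(s string) time.Time {
			t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", s)
			if err != nil {
				panic(err)
			}
			return t
		}
		created := parse("2025-11-23 09:07:20 +0000 UTC")                  // podCreationTimestamp
		running := parse("2025-11-23 09:07:23.419208622 +0000 UTC")        // watchObservedRunningTime
		pullStart := parse("2025-11-23 09:07:20.748773294 +0000 UTC")      // firstStartedPulling
		pullEnd := parse("2025-11-23 09:07:22.740385751 +0000 UTC")        // lastFinishedPulling

		e2e := running.Sub(created)         // podStartE2EDuration = 3.419208622s
		slo := e2e - pullEnd.Sub(pullStart) // SLO duration excludes the image-pull window
		fmt.Println(e2e, slo)               // ~1.4276s, matching the logged 1.427596171
	}
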
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-619589 -n no-preload-619589
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-619589 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (2.32s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (2.28s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-529341 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-529341 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (258.50763ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T09:07:51Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-529341 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
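
MK_ADDON_ENABLE_PAUSED above means the addon-enable path first asks the container runtime which containers are paused, and on this crio node that probe shells out to `sudo runc list -f json`; it fails here because /run/runc does not exist on the freshly restarted machine. A minimal sketch of such a probe, assuming runc's documented JSON state fields (id, status) — this is not minikube's actual implementation:

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// runcContainer mirrors a subset of `runc list -f json` state output
	// (field names assumed from runc's documented JSON format).
	type runcContainer struct {
		ID     string `json:"id"`
		Status string `json:"status"`
	}

	func pausedContainers() ([]string, error) {
		out, err := exec.Command("sudo", "runc", "list", "-f", "json").Output()
		if err != nil {
			// The failure mode above: runc exits 1 with
			// "open /run/runc: no such file or directory".
			return nil, fmt.Errorf("runc: sudo runc list -f json: %w", err)
		}
		var cs []runcContainer
		if err := json.Unmarshal(out, &cs); err != nil {
			return nil, err
		}
		var paused []string
		for _, c := range cs {
			if c.Status == "paused" {
				paused = append(paused, c.ID)
			}
		}
		return paused, nil
	}

	func main() {
		ids, err := pausedContainers()
		fmt.Println(ids, err)
	}
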
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-529341 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context embed-certs-529341 describe deploy/metrics-server -n kube-system: exit status 1 (57.83191ms)

** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context embed-certs-529341 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
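
The assertion at start_stop_delete_test.go:219 derives the expected image reference by joining the --registries override onto the --images value, then checks the deployment description for it; here the describe call returned nothing, so the containment check fails with empty "Addon deployment info". A sketch of that check (the helper name and inputs are illustrative, not the test's actual code):

	package main

	import (
		"fmt"
		"strings"
	)

	// expectedImage joins a --registries override with an --images value,
	// e.g. "fake.domain" + "registry.k8s.io/echoserver:1.4".
	func expectedImage(registry, image string) string {
		return registry + "/" + image
	}

	func main() {
		want := expectedImage("fake.domain", "registry.k8s.io/echoserver:1.4")
		deployInfo := "" // empty here: `kubectl describe deploy/metrics-server` found nothing
		if !strings.Contains(deployInfo, want) {
			fmt.Printf("addon did not load correct image. Expected to contain %q\n", want)
		}
	}
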
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-529341
helpers_test.go:243: (dbg) docker inspect embed-certs-529341:

-- stdout --
	[
	    {
	        "Id": "cd25ec65ad7d7225aaf7739d7d3946f2d586e4220fd3d8a39c26bd159fb65cbc",
	        "Created": "2025-11-23T09:07:06.148431191Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 399879,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-23T09:07:06.494337671Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:133ca4ac39008d0056ad45d8cb70521d6b70d6e1b8bbff4678fd4b354efbdf70",
	        "ResolvConfPath": "/var/lib/docker/containers/cd25ec65ad7d7225aaf7739d7d3946f2d586e4220fd3d8a39c26bd159fb65cbc/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/cd25ec65ad7d7225aaf7739d7d3946f2d586e4220fd3d8a39c26bd159fb65cbc/hostname",
	        "HostsPath": "/var/lib/docker/containers/cd25ec65ad7d7225aaf7739d7d3946f2d586e4220fd3d8a39c26bd159fb65cbc/hosts",
	        "LogPath": "/var/lib/docker/containers/cd25ec65ad7d7225aaf7739d7d3946f2d586e4220fd3d8a39c26bd159fb65cbc/cd25ec65ad7d7225aaf7739d7d3946f2d586e4220fd3d8a39c26bd159fb65cbc-json.log",
	        "Name": "/embed-certs-529341",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-529341:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "embed-certs-529341",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "cd25ec65ad7d7225aaf7739d7d3946f2d586e4220fd3d8a39c26bd159fb65cbc",
	                "LowerDir": "/var/lib/docker/overlay2/04b273d65210e041a5d49ab128cb15a16823014667a3e5c0578a92356cb061a5-init/diff:/var/lib/docker/overlay2/5d7426d7b7590330a796f9ce8929a3dfaf6f95af5e86f4e4ea7f2a7f53308616/diff",
	                "MergedDir": "/var/lib/docker/overlay2/04b273d65210e041a5d49ab128cb15a16823014667a3e5c0578a92356cb061a5/merged",
	                "UpperDir": "/var/lib/docker/overlay2/04b273d65210e041a5d49ab128cb15a16823014667a3e5c0578a92356cb061a5/diff",
	                "WorkDir": "/var/lib/docker/overlay2/04b273d65210e041a5d49ab128cb15a16823014667a3e5c0578a92356cb061a5/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-529341",
	                "Source": "/var/lib/docker/volumes/embed-certs-529341/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-529341",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-529341",
	                "name.minikube.sigs.k8s.io": "embed-certs-529341",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "21b94515e990d671dfa2e9d56c8120b00e248dbc8e98cac5215cf1ffe2c75281",
	            "SandboxKey": "/var/run/docker/netns/21b94515e990",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33098"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33099"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33102"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33100"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33101"
	                    }
	                ]
	            },
	            "Networks": {
	                "embed-certs-529341": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "b80987257b925fe4e7d7324e318d1724b2e83e5fe12e18005bf9298153219f99",
	                    "EndpointID": "0bfab0f93a6820ad38d8025c07237611509d89b862756edcc27a999a32293911",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "MacAddress": "ea:18:c8:45:3f:f9",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-529341",
	                        "cd25ec65ad7d"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
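
Any single field in that JSON can also be pulled with a `docker inspect` Go template instead of parsing the whole document; for example, the host port mapped to the API server's 8443/tcp. A sketch that shells out the same way the helpers do (container name taken from the output above):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// Go-template lookup into .NetworkSettings.Ports, as shown in the JSON above.
		tmpl := `{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}`
		out, err := exec.Command("docker", "inspect", "-f", tmpl, "embed-certs-529341").Output()
		if err != nil {
			panic(err)
		}
		fmt.Println(strings.TrimSpace(string(out))) // "33101" for the container above
	}
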
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-529341 -n embed-certs-529341
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-529341 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-529341 logs -n 25: (1.080649434s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p embed-certs-529341 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-529341           │ jenkins │ v1.37.0 │ 23 Nov 25 09:07 UTC │ 23 Nov 25 09:07 UTC │
	│ ssh     │ -p bridge-741183 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                                                                                                                │ bridge-741183                │ jenkins │ v1.37.0 │ 23 Nov 25 09:07 UTC │                     │
	│ ssh     │ -p bridge-741183 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                                                                                                          │ bridge-741183                │ jenkins │ v1.37.0 │ 23 Nov 25 09:07 UTC │ 23 Nov 25 09:07 UTC │
	│ ssh     │ -p bridge-741183 sudo cri-dockerd --version                                                                                                                                                                                                   │ bridge-741183                │ jenkins │ v1.37.0 │ 23 Nov 25 09:07 UTC │ 23 Nov 25 09:07 UTC │
	│ ssh     │ -p bridge-741183 sudo systemctl status containerd --all --full --no-pager                                                                                                                                                                     │ bridge-741183                │ jenkins │ v1.37.0 │ 23 Nov 25 09:07 UTC │                     │
	│ ssh     │ -p bridge-741183 sudo systemctl cat containerd --no-pager                                                                                                                                                                                     │ bridge-741183                │ jenkins │ v1.37.0 │ 23 Nov 25 09:07 UTC │ 23 Nov 25 09:07 UTC │
	│ ssh     │ -p bridge-741183 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                              │ bridge-741183                │ jenkins │ v1.37.0 │ 23 Nov 25 09:07 UTC │ 23 Nov 25 09:07 UTC │
	│ ssh     │ -p bridge-741183 sudo cat /etc/containerd/config.toml                                                                                                                                                                                         │ bridge-741183                │ jenkins │ v1.37.0 │ 23 Nov 25 09:07 UTC │ 23 Nov 25 09:07 UTC │
	│ ssh     │ -p bridge-741183 sudo containerd config dump                                                                                                                                                                                                  │ bridge-741183                │ jenkins │ v1.37.0 │ 23 Nov 25 09:07 UTC │ 23 Nov 25 09:07 UTC │
	│ ssh     │ -p bridge-741183 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ bridge-741183                │ jenkins │ v1.37.0 │ 23 Nov 25 09:07 UTC │ 23 Nov 25 09:07 UTC │
	│ ssh     │ -p bridge-741183 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ bridge-741183                │ jenkins │ v1.37.0 │ 23 Nov 25 09:07 UTC │ 23 Nov 25 09:07 UTC │
	│ ssh     │ -p bridge-741183 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ bridge-741183                │ jenkins │ v1.37.0 │ 23 Nov 25 09:07 UTC │ 23 Nov 25 09:07 UTC │
	│ ssh     │ -p bridge-741183 sudo crio config                                                                                                                                                                                                             │ bridge-741183                │ jenkins │ v1.37.0 │ 23 Nov 25 09:07 UTC │ 23 Nov 25 09:07 UTC │
	│ delete  │ -p bridge-741183                                                                                                                                                                                                                              │ bridge-741183                │ jenkins │ v1.37.0 │ 23 Nov 25 09:07 UTC │ 23 Nov 25 09:07 UTC │
	│ delete  │ -p disable-driver-mounts-740936                                                                                                                                                                                                               │ disable-driver-mounts-740936 │ jenkins │ v1.37.0 │ 23 Nov 25 09:07 UTC │ 23 Nov 25 09:07 UTC │
	│ start   │ -p default-k8s-diff-port-602386 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-602386 │ jenkins │ v1.37.0 │ 23 Nov 25 09:07 UTC │ 23 Nov 25 09:07 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-054094 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-054094       │ jenkins │ v1.37.0 │ 23 Nov 25 09:07 UTC │                     │
	│ stop    │ -p old-k8s-version-054094 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-054094       │ jenkins │ v1.37.0 │ 23 Nov 25 09:07 UTC │ 23 Nov 25 09:07 UTC │
	│ addons  │ enable metrics-server -p no-preload-619589 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-619589            │ jenkins │ v1.37.0 │ 23 Nov 25 09:07 UTC │                     │
	│ stop    │ -p no-preload-619589 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-619589            │ jenkins │ v1.37.0 │ 23 Nov 25 09:07 UTC │ 23 Nov 25 09:07 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-054094 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-054094       │ jenkins │ v1.37.0 │ 23 Nov 25 09:07 UTC │ 23 Nov 25 09:07 UTC │
	│ start   │ -p old-k8s-version-054094 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-054094       │ jenkins │ v1.37.0 │ 23 Nov 25 09:07 UTC │                     │
	│ addons  │ enable dashboard -p no-preload-619589 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-619589            │ jenkins │ v1.37.0 │ 23 Nov 25 09:07 UTC │ 23 Nov 25 09:07 UTC │
	│ start   │ -p no-preload-619589 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-619589            │ jenkins │ v1.37.0 │ 23 Nov 25 09:07 UTC │                     │
	│ addons  │ enable metrics-server -p embed-certs-529341 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-529341           │ jenkins │ v1.37.0 │ 23 Nov 25 09:07 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/23 09:07:47
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1123 09:07:47.433038  409946 out.go:360] Setting OutFile to fd 1 ...
	I1123 09:07:47.433329  409946 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 09:07:47.433340  409946 out.go:374] Setting ErrFile to fd 2...
	I1123 09:07:47.433346  409946 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 09:07:47.433568  409946 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21969-103686/.minikube/bin
	I1123 09:07:47.434047  409946 out.go:368] Setting JSON to false
	I1123 09:07:47.435256  409946 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":6607,"bootTime":1763882260,"procs":326,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1123 09:07:47.435315  409946 start.go:143] virtualization: kvm guest
	I1123 09:07:47.437173  409946 out.go:179] * [no-preload-619589] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1123 09:07:47.438290  409946 out.go:179]   - MINIKUBE_LOCATION=21969
	I1123 09:07:47.438300  409946 notify.go:221] Checking for updates...
	I1123 09:07:47.440719  409946 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1123 09:07:47.441714  409946 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21969-103686/kubeconfig
	I1123 09:07:47.442677  409946 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21969-103686/.minikube
	I1123 09:07:47.443689  409946 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1123 09:07:47.444737  409946 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1123 09:07:47.446257  409946 config.go:182] Loaded profile config "no-preload-619589": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 09:07:47.446828  409946 driver.go:422] Setting default libvirt URI to qemu:///system
	I1123 09:07:47.470401  409946 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1123 09:07:47.470578  409946 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 09:07:47.531859  409946 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:68 OomKillDisable:false NGoroutines:77 SystemTime:2025-11-23 09:07:47.520365502 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1123 09:07:47.532018  409946 docker.go:319] overlay module found
	I1123 09:07:47.533728  409946 out.go:179] * Using the docker driver based on existing profile
	I1123 09:07:47.534776  409946 start.go:309] selected driver: docker
	I1123 09:07:47.534793  409946 start.go:927] validating driver "docker" against &{Name:no-preload-619589 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-619589 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 09:07:47.534880  409946 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1123 09:07:47.535511  409946 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 09:07:47.596171  409946 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:68 OomKillDisable:false NGoroutines:77 SystemTime:2025-11-23 09:07:47.586731934 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1123 09:07:47.596488  409946 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1123 09:07:47.596521  409946 cni.go:84] Creating CNI manager for ""
	I1123 09:07:47.596575  409946 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1123 09:07:47.596642  409946 start.go:353] cluster config:
	{Name:no-preload-619589 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-619589 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 09:07:47.598398  409946 out.go:179] * Starting "no-preload-619589" primary control-plane node in "no-preload-619589" cluster
	I1123 09:07:47.599465  409946 cache.go:134] Beginning downloading kic base image for docker with crio
	I1123 09:07:47.600568  409946 out.go:179] * Pulling base image v0.0.48-1763789673-21948 ...
	I1123 09:07:47.601700  409946 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1123 09:07:47.601794  409946 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1123 09:07:47.601841  409946 profile.go:143] Saving config to /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/no-preload-619589/config.json ...
	I1123 09:07:47.602158  409946 cache.go:107] acquiring lock: {Name:mk7ecb1d61353190a66ac7e6ba6d7eceff124308 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1123 09:07:47.602188  409946 cache.go:107] acquiring lock: {Name:mkcd32e302108894f2df717f3cf5b3ecc1441854 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1123 09:07:47.602216  409946 cache.go:107] acquiring lock: {Name:mk8d2515122be33e67ac10c187be7568a955083e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1123 09:07:47.602234  409946 cache.go:107] acquiring lock: {Name:mk379accc7020b7a3caf0ec5d82ca28cbbc76a2d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1123 09:07:47.602306  409946 cache.go:115] /home/jenkins/minikube-integration/21969-103686/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1123 09:07:47.602309  409946 cache.go:107] acquiring lock: {Name:mk4c9dc3c03a83bc838f3aacfba91088513abdd2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1123 09:07:47.602331  409946 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21969-103686/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 163.138µs
	I1123 09:07:47.602329  409946 cache.go:107] acquiring lock: {Name:mkd3cdfd6b4407abefc43a25dbf6321eab12e5b3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1123 09:07:47.602348  409946 cache.go:115] /home/jenkins/minikube-integration/21969-103686/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 exists
	I1123 09:07:47.602356  409946 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21969-103686/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1123 09:07:47.602287  409946 cache.go:115] /home/jenkins/minikube-integration/21969-103686/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1 exists
	I1123 09:07:47.602361  409946 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21969-103686/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1" took 164.806µs
	I1123 09:07:47.602381  409946 cache.go:115] /home/jenkins/minikube-integration/21969-103686/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1 exists
	I1123 09:07:47.602394  409946 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21969-103686/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 succeeded
	I1123 09:07:47.602396  409946 cache.go:115] /home/jenkins/minikube-integration/21969-103686/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1 exists
	I1123 09:07:47.602267  409946 cache.go:107] acquiring lock: {Name:mk029a98d17710f60c9431e6478449e7182acb86 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1123 09:07:47.602402  409946 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.34.1" -> "/home/jenkins/minikube-integration/21969-103686/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1" took 99.714µs
	I1123 09:07:47.602406  409946 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.34.1" -> "/home/jenkins/minikube-integration/21969-103686/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1" took 213.953µs
	I1123 09:07:47.602413  409946 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.34.1 -> /home/jenkins/minikube-integration/21969-103686/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1 succeeded
	I1123 09:07:47.602416  409946 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.34.1 -> /home/jenkins/minikube-integration/21969-103686/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1 succeeded
	I1123 09:07:47.602377  409946 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.34.1" -> "/home/jenkins/minikube-integration/21969-103686/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1" took 236.623µs
	I1123 09:07:47.602428  409946 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.34.1 -> /home/jenkins/minikube-integration/21969-103686/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1 succeeded
	I1123 09:07:47.602434  409946 cache.go:115] /home/jenkins/minikube-integration/21969-103686/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1 exists
	I1123 09:07:47.602296  409946 cache.go:107] acquiring lock: {Name:mk535005b718e5b2e2a19960fd3637ce47e59ae7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1123 09:07:47.602434  409946 cache.go:115] /home/jenkins/minikube-integration/21969-103686/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 exists
	I1123 09:07:47.602442  409946 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.34.1" -> "/home/jenkins/minikube-integration/21969-103686/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1" took 220.718µs
	I1123 09:07:47.602451  409946 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.34.1 -> /home/jenkins/minikube-integration/21969-103686/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1 succeeded
	I1123 09:07:47.602448  409946 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.12.1" -> "/home/jenkins/minikube-integration/21969-103686/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1" took 122.924µs
	I1123 09:07:47.602468  409946 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.12.1 -> /home/jenkins/minikube-integration/21969-103686/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 succeeded
	I1123 09:07:47.602471  409946 cache.go:115] /home/jenkins/minikube-integration/21969-103686/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 exists
	I1123 09:07:47.602484  409946 cache.go:96] cache image "registry.k8s.io/etcd:3.6.4-0" -> "/home/jenkins/minikube-integration/21969-103686/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0" took 195.122µs
	I1123 09:07:47.602495  409946 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.4-0 -> /home/jenkins/minikube-integration/21969-103686/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 succeeded
	I1123 09:07:47.602508  409946 cache.go:87] Successfully saved all images to host disk.
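
The burst of cache.go lines above is a guarded save-once pattern: each image name takes its own lock, and when the tarball already exists under .minikube/cache the save is skipped in microseconds. A sketch of the idea (paths and locking are illustrative, not minikube's actual code):

	package main

	import (
		"fmt"
		"os"
		"path/filepath"
		"strings"
		"sync"
	)

	var locks sync.Map // one mutex per image name, like the per-name locks above

	func cacheImage(cacheDir, image string) error {
		mu, _ := locks.LoadOrStore(image, &sync.Mutex{})
		mu.(*sync.Mutex).Lock()
		defer mu.(*sync.Mutex).Unlock()

		tar := filepath.Join(cacheDir, strings.ReplaceAll(image, ":", "_"))
		if _, err := os.Stat(tar); err == nil {
			return nil // the "exists ... skipping" branch: nothing to do
		}
		// Real code would pull the image and write the tarball here.
		return fmt.Errorf("saving %s not implemented in this sketch", image)
	}

	func main() {
		fmt.Println(cacheImage("/tmp/cache", "registry.k8s.io/pause:3.10.1"))
	}
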
	I1123 09:07:47.625922  409946 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon, skipping pull
	I1123 09:07:47.625947  409946 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in daemon, skipping load
	I1123 09:07:47.625995  409946 cache.go:243] Successfully downloaded all kic artifacts
	I1123 09:07:47.626046  409946 start.go:360] acquireMachinesLock for no-preload-619589: {Name:mk679054b670a8ea923c71659f0f4888c22bf79b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1123 09:07:47.626120  409946 start.go:364] duration metric: took 51.356µs to acquireMachinesLock for "no-preload-619589"
	I1123 09:07:47.626142  409946 start.go:96] Skipping create...Using existing machine configuration
	I1123 09:07:47.626148  409946 fix.go:54] fixHost starting: 
	I1123 09:07:47.626445  409946 cli_runner.go:164] Run: docker container inspect no-preload-619589 --format={{.State.Status}}
	I1123 09:07:47.649066  409946 fix.go:112] recreateIfNeeded on no-preload-619589: state=Stopped err=<nil>
	W1123 09:07:47.649104  409946 fix.go:138] unexpected machine state, will restart: <nil>
	W1123 09:07:45.271169  401015 node_ready.go:57] node "default-k8s-diff-port-602386" has "Ready":"False" status (will retry)
	W1123 09:07:47.271685  401015 node_ready.go:57] node "default-k8s-diff-port-602386" has "Ready":"False" status (will retry)
	I1123 09:07:47.772070  401015 node_ready.go:49] node "default-k8s-diff-port-602386" is "Ready"
	I1123 09:07:47.772100  401015 node_ready.go:38] duration metric: took 12.004063012s for node "default-k8s-diff-port-602386" to be "Ready" ...
	I1123 09:07:47.772120  401015 api_server.go:52] waiting for apiserver process to appear ...
	I1123 09:07:47.772173  401015 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1123 09:07:47.788797  401015 api_server.go:72] duration metric: took 12.300702565s to wait for apiserver process to appear ...
	I1123 09:07:47.788827  401015 api_server.go:88] waiting for apiserver healthz status ...
	I1123 09:07:47.788852  401015 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8444/healthz ...
	I1123 09:07:47.793700  401015 api_server.go:279] https://192.168.94.2:8444/healthz returned 200:
	ok
	I1123 09:07:47.794993  401015 api_server.go:141] control plane version: v1.34.1
	I1123 09:07:47.795026  401015 api_server.go:131] duration metric: took 6.189285ms to wait for apiserver health ...
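
The healthz wait above is a plain HTTPS GET against the forwarded apiserver port, repeated until it returns 200 with body "ok". A minimal sketch of such a poll (endpoint copied from the log; the insecure transport stands in for minikube's CA-pinned client, which this sketch does not reproduce):

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout: 5 * time.Second,
			// The apiserver cert is signed by minikubeCA, so this bare probe
			// skips verification instead of pinning the cluster CA.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		for i := 0; i < 60; i++ {
			resp, err := client.Get("https://192.168.94.2:8444/healthz")
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK && string(body) == "ok" {
					fmt.Println("apiserver healthy")
					return
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
		fmt.Println("apiserver did not become healthy")
	}
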
	I1123 09:07:47.795038  401015 system_pods.go:43] waiting for kube-system pods to appear ...
	I1123 09:07:47.801006  401015 system_pods.go:59] 8 kube-system pods found
	I1123 09:07:47.801056  401015 system_pods.go:61] "coredns-66bc5c9577-64rdm" [47d854af-a566-4a34-a2aa-c7e774b7349f] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 09:07:47.801064  401015 system_pods.go:61] "etcd-default-k8s-diff-port-602386" [8c784536-b148-44b8-b699-d96e8396249b] Running
	I1123 09:07:47.801072  401015 system_pods.go:61] "kindnet-kqj66" [33d86c85-7de0-42a9-90af-50ba26b9c963] Running
	I1123 09:07:47.801078  401015 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-602386" [1e21b240-758f-44a7-9892-64bcd5868d3c] Running
	I1123 09:07:47.802944  401015 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-602386" [458d9abb-1c23-41f9-ad60-30086af0728a] Running
	I1123 09:07:47.802978  401015 system_pods.go:61] "kube-proxy-wnrqx" [0c0df979-c169-4565-8362-dca7550a80f5] Running
	I1123 09:07:47.802987  401015 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-602386" [530a6482-dbd1-47d7-bd47-9dcef0947a43] Running
	I1123 09:07:47.803000  401015 system_pods.go:61] "storage-provisioner" [68aad3cb-9d9e-4bca-9271-f4b65e2a8a9f] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 09:07:47.803009  401015 system_pods.go:74] duration metric: took 7.964015ms to wait for pod list to return data ...
	I1123 09:07:47.803032  401015 default_sa.go:34] waiting for default service account to be created ...
	I1123 09:07:47.811703  401015 default_sa.go:45] found service account: "default"
	I1123 09:07:47.811744  401015 default_sa.go:55] duration metric: took 8.703815ms for default service account to be created ...
	I1123 09:07:47.811764  401015 system_pods.go:116] waiting for k8s-apps to be running ...
	I1123 09:07:47.814987  401015 system_pods.go:86] 8 kube-system pods found
	I1123 09:07:47.815030  401015 system_pods.go:89] "coredns-66bc5c9577-64rdm" [47d854af-a566-4a34-a2aa-c7e774b7349f] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 09:07:47.815039  401015 system_pods.go:89] "etcd-default-k8s-diff-port-602386" [8c784536-b148-44b8-b699-d96e8396249b] Running
	I1123 09:07:47.815048  401015 system_pods.go:89] "kindnet-kqj66" [33d86c85-7de0-42a9-90af-50ba26b9c963] Running
	I1123 09:07:47.815054  401015 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-602386" [1e21b240-758f-44a7-9892-64bcd5868d3c] Running
	I1123 09:07:47.815060  401015 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-602386" [458d9abb-1c23-41f9-ad60-30086af0728a] Running
	I1123 09:07:47.815065  401015 system_pods.go:89] "kube-proxy-wnrqx" [0c0df979-c169-4565-8362-dca7550a80f5] Running
	I1123 09:07:47.815070  401015 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-602386" [530a6482-dbd1-47d7-bd47-9dcef0947a43] Running
	I1123 09:07:47.815075  401015 system_pods.go:89] "storage-provisioner" [68aad3cb-9d9e-4bca-9271-f4b65e2a8a9f] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 09:07:47.815111  401015 retry.go:31] will retry after 199.782533ms: missing components: kube-dns
	I1123 09:07:48.022278  401015 system_pods.go:86] 8 kube-system pods found
	I1123 09:07:48.022316  401015 system_pods.go:89] "coredns-66bc5c9577-64rdm" [47d854af-a566-4a34-a2aa-c7e774b7349f] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 09:07:48.022326  401015 system_pods.go:89] "etcd-default-k8s-diff-port-602386" [8c784536-b148-44b8-b699-d96e8396249b] Running
	I1123 09:07:48.022335  401015 system_pods.go:89] "kindnet-kqj66" [33d86c85-7de0-42a9-90af-50ba26b9c963] Running
	I1123 09:07:48.022341  401015 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-602386" [1e21b240-758f-44a7-9892-64bcd5868d3c] Running
	I1123 09:07:48.022347  401015 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-602386" [458d9abb-1c23-41f9-ad60-30086af0728a] Running
	I1123 09:07:48.022353  401015 system_pods.go:89] "kube-proxy-wnrqx" [0c0df979-c169-4565-8362-dca7550a80f5] Running
	I1123 09:07:48.022359  401015 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-602386" [530a6482-dbd1-47d7-bd47-9dcef0947a43] Running
	I1123 09:07:48.022368  401015 system_pods.go:89] "storage-provisioner" [68aad3cb-9d9e-4bca-9271-f4b65e2a8a9f] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 09:07:48.022388  401015 retry.go:31] will retry after 299.362918ms: missing components: kube-dns
	I1123 09:07:48.326631  401015 system_pods.go:86] 8 kube-system pods found
	I1123 09:07:48.326678  401015 system_pods.go:89] "coredns-66bc5c9577-64rdm" [47d854af-a566-4a34-a2aa-c7e774b7349f] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 09:07:48.326686  401015 system_pods.go:89] "etcd-default-k8s-diff-port-602386" [8c784536-b148-44b8-b699-d96e8396249b] Running
	I1123 09:07:48.326694  401015 system_pods.go:89] "kindnet-kqj66" [33d86c85-7de0-42a9-90af-50ba26b9c963] Running
	I1123 09:07:48.326701  401015 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-602386" [1e21b240-758f-44a7-9892-64bcd5868d3c] Running
	I1123 09:07:48.326707  401015 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-602386" [458d9abb-1c23-41f9-ad60-30086af0728a] Running
	I1123 09:07:48.326713  401015 system_pods.go:89] "kube-proxy-wnrqx" [0c0df979-c169-4565-8362-dca7550a80f5] Running
	I1123 09:07:48.326718  401015 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-602386" [530a6482-dbd1-47d7-bd47-9dcef0947a43] Running
	I1123 09:07:48.326726  401015 system_pods.go:89] "storage-provisioner" [68aad3cb-9d9e-4bca-9271-f4b65e2a8a9f] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 09:07:48.326775  401015 retry.go:31] will retry after 405.595607ms: missing components: kube-dns
	I1123 09:07:48.736501  401015 system_pods.go:86] 8 kube-system pods found
	I1123 09:07:48.736531  401015 system_pods.go:89] "coredns-66bc5c9577-64rdm" [47d854af-a566-4a34-a2aa-c7e774b7349f] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 09:07:48.736536  401015 system_pods.go:89] "etcd-default-k8s-diff-port-602386" [8c784536-b148-44b8-b699-d96e8396249b] Running
	I1123 09:07:48.736542  401015 system_pods.go:89] "kindnet-kqj66" [33d86c85-7de0-42a9-90af-50ba26b9c963] Running
	I1123 09:07:48.736546  401015 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-602386" [1e21b240-758f-44a7-9892-64bcd5868d3c] Running
	I1123 09:07:48.736551  401015 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-602386" [458d9abb-1c23-41f9-ad60-30086af0728a] Running
	I1123 09:07:48.736555  401015 system_pods.go:89] "kube-proxy-wnrqx" [0c0df979-c169-4565-8362-dca7550a80f5] Running
	I1123 09:07:48.736558  401015 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-602386" [530a6482-dbd1-47d7-bd47-9dcef0947a43] Running
	I1123 09:07:48.736565  401015 system_pods.go:89] "storage-provisioner" [68aad3cb-9d9e-4bca-9271-f4b65e2a8a9f] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 09:07:48.736584  401015 retry.go:31] will retry after 597.521925ms: missing components: kube-dns
	I1123 09:07:49.338640  401015 system_pods.go:86] 8 kube-system pods found
	I1123 09:07:49.338677  401015 system_pods.go:89] "coredns-66bc5c9577-64rdm" [47d854af-a566-4a34-a2aa-c7e774b7349f] Running
	I1123 09:07:49.338684  401015 system_pods.go:89] "etcd-default-k8s-diff-port-602386" [8c784536-b148-44b8-b699-d96e8396249b] Running
	I1123 09:07:49.338690  401015 system_pods.go:89] "kindnet-kqj66" [33d86c85-7de0-42a9-90af-50ba26b9c963] Running
	I1123 09:07:49.338694  401015 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-602386" [1e21b240-758f-44a7-9892-64bcd5868d3c] Running
	I1123 09:07:49.338698  401015 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-602386" [458d9abb-1c23-41f9-ad60-30086af0728a] Running
	I1123 09:07:49.338701  401015 system_pods.go:89] "kube-proxy-wnrqx" [0c0df979-c169-4565-8362-dca7550a80f5] Running
	I1123 09:07:49.338704  401015 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-602386" [530a6482-dbd1-47d7-bd47-9dcef0947a43] Running
	I1123 09:07:49.338707  401015 system_pods.go:89] "storage-provisioner" [68aad3cb-9d9e-4bca-9271-f4b65e2a8a9f] Running
	I1123 09:07:49.338715  401015 system_pods.go:126] duration metric: took 1.526943774s to wait for k8s-apps to be running ...
	I1123 09:07:49.338723  401015 system_svc.go:44] waiting for kubelet service to be running ....
	I1123 09:07:49.338767  401015 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 09:07:49.351457  401015 system_svc.go:56] duration metric: took 12.724784ms WaitForService to wait for kubelet
	I1123 09:07:49.351488  401015 kubeadm.go:587] duration metric: took 13.863399853s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1123 09:07:49.351509  401015 node_conditions.go:102] verifying NodePressure condition ...
	I1123 09:07:49.354278  401015 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1123 09:07:49.354302  401015 node_conditions.go:123] node cpu capacity is 8
	I1123 09:07:49.354317  401015 node_conditions.go:105] duration metric: took 2.803369ms to run NodePressure ...
	I1123 09:07:49.354329  401015 start.go:242] waiting for startup goroutines ...
	I1123 09:07:49.354338  401015 start.go:247] waiting for cluster config update ...
	I1123 09:07:49.354348  401015 start.go:256] writing updated cluster config ...
	I1123 09:07:49.354580  401015 ssh_runner.go:195] Run: rm -f paused
	I1123 09:07:49.358289  401015 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1123 09:07:49.361627  401015 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-64rdm" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:07:49.365494  401015 pod_ready.go:94] pod "coredns-66bc5c9577-64rdm" is "Ready"
	I1123 09:07:49.365518  401015 pod_ready.go:86] duration metric: took 3.86978ms for pod "coredns-66bc5c9577-64rdm" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:07:49.367291  401015 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-602386" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:07:49.370769  401015 pod_ready.go:94] pod "etcd-default-k8s-diff-port-602386" is "Ready"
	I1123 09:07:49.370786  401015 pod_ready.go:86] duration metric: took 3.474711ms for pod "etcd-default-k8s-diff-port-602386" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:07:49.372448  401015 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-602386" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:07:49.375693  401015 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-602386" is "Ready"
	I1123 09:07:49.375710  401015 pod_ready.go:86] duration metric: took 3.240891ms for pod "kube-apiserver-default-k8s-diff-port-602386" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:07:49.377493  401015 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-602386" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:07:49.762835  401015 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-602386" is "Ready"
	I1123 09:07:49.762861  401015 pod_ready.go:86] duration metric: took 385.352648ms for pod "kube-controller-manager-default-k8s-diff-port-602386" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:07:49.963351  401015 pod_ready.go:83] waiting for pod "kube-proxy-wnrqx" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:07:50.362793  401015 pod_ready.go:94] pod "kube-proxy-wnrqx" is "Ready"
	I1123 09:07:50.362822  401015 pod_ready.go:86] duration metric: took 399.44633ms for pod "kube-proxy-wnrqx" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:07:50.563494  401015 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-602386" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:07:50.962787  401015 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-602386" is "Ready"
	I1123 09:07:50.962814  401015 pod_ready.go:86] duration metric: took 399.291647ms for pod "kube-scheduler-default-k8s-diff-port-602386" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:07:50.962828  401015 pod_ready.go:40] duration metric: took 1.604507271s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1123 09:07:51.006685  401015 start.go:625] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1123 09:07:51.008507  401015 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-602386" cluster and "default" namespace by default
	W1123 09:07:46.419800  406807 pod_ready.go:104] pod "coredns-5dd5756b68-whp8m" is not "Ready", error: <nil>
	W1123 09:07:48.919893  406807 pod_ready.go:104] pod "coredns-5dd5756b68-whp8m" is not "Ready", error: <nil>
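
	The wait loop above is minikube's retry.go backing off with roughly 1.5x-growing, jittered delays (199ms, 299ms, 405ms, 597ms) until kube-dns leaves Pending. A minimal Go sketch of that pattern, under stated assumptions: waitForKubeDNS and podLister are hypothetical stand-ins for illustration, not minikube's actual internal API.

	package wait

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// podLister abstracts "list kube-system pods and report which are Running";
	// in minikube this is backed by client-go. Hypothetical stand-in.
	type podLister func() (running map[string]bool, err error)

	// waitForKubeDNS polls until kube-dns is Running, sleeping with jittered,
	// growing delays between attempts, mirroring the retry.go lines above.
	func waitForKubeDNS(list podLister, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		delay := 200 * time.Millisecond
		for time.Now().Before(deadline) {
			if running, err := list(); err == nil && running["kube-dns"] {
				return nil
			}
			// Grow ~1.5x per round with a little jitter, like the observed
			// 199ms -> 299ms -> 405ms -> 597ms progression.
			sleep := delay + time.Duration(rand.Int63n(int64(delay/5)))
			fmt.Printf("will retry after %v: missing components: kube-dns\n", sleep)
			time.Sleep(sleep)
			delay = delay * 3 / 2
		}
		return errors.New("timed out waiting for kube-dns")
	}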
	
	
	==> CRI-O <==
	Nov 23 09:07:38 embed-certs-529341 crio[775]: time="2025-11-23T09:07:38.794610826Z" level=info msg="Started container" PID=1825 containerID=50c12036dc729d9603dc1e29f4ca838b0d6b7f8a54ae8be5ea98711ea5735544 description=kube-system/storage-provisioner/storage-provisioner id=1fdf7da7-b37e-4674-9b92-1c4806b4d95d name=/runtime.v1.RuntimeService/StartContainer sandboxID=38f2b7a624cbc1f3eb41dc52b0a464c53d51cdf4a30aa9de4152253d07aa5b96
	Nov 23 09:07:38 embed-certs-529341 crio[775]: time="2025-11-23T09:07:38.802232511Z" level=info msg="Started container" PID=1828 containerID=4cdaa3a069d9b398246aee655cf8d9ecaf67cb48314772071c97f69188c264f7 description=kube-system/coredns-66bc5c9577-k4bmj/coredns id=52439ce7-80a7-474e-a4fd-47121bc3fe06 name=/runtime.v1.RuntimeService/StartContainer sandboxID=4768c39db96430bd6cc22d51b0b57e43f96ec4f5817cc74e317c38c20a89b693
	Nov 23 09:07:41 embed-certs-529341 crio[775]: time="2025-11-23T09:07:41.790050954Z" level=info msg="Running pod sandbox: default/busybox/POD" id=e9aec48a-e0e2-4b92-9814-803b49c942db name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 23 09:07:41 embed-certs-529341 crio[775]: time="2025-11-23T09:07:41.790131861Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 09:07:41 embed-certs-529341 crio[775]: time="2025-11-23T09:07:41.796493177Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:504904255cfc8ac4cfac4c4286ff6cee2bc82feb18e8db0637138c39a9686809 UID:05390c6f-b2aa-4701-8a3d-9119282e9b94 NetNS:/var/run/netns/14c1d9f2-bfc0-46a4-9e74-ecd15788bd76 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000292728}] Aliases:map[]}"
	Nov 23 09:07:41 embed-certs-529341 crio[775]: time="2025-11-23T09:07:41.796531353Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Nov 23 09:07:41 embed-certs-529341 crio[775]: time="2025-11-23T09:07:41.808646807Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:504904255cfc8ac4cfac4c4286ff6cee2bc82feb18e8db0637138c39a9686809 UID:05390c6f-b2aa-4701-8a3d-9119282e9b94 NetNS:/var/run/netns/14c1d9f2-bfc0-46a4-9e74-ecd15788bd76 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000292728}] Aliases:map[]}"
	Nov 23 09:07:41 embed-certs-529341 crio[775]: time="2025-11-23T09:07:41.808772424Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Nov 23 09:07:41 embed-certs-529341 crio[775]: time="2025-11-23T09:07:41.809673092Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Nov 23 09:07:41 embed-certs-529341 crio[775]: time="2025-11-23T09:07:41.810861936Z" level=info msg="Ran pod sandbox 504904255cfc8ac4cfac4c4286ff6cee2bc82feb18e8db0637138c39a9686809 with infra container: default/busybox/POD" id=e9aec48a-e0e2-4b92-9814-803b49c942db name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 23 09:07:41 embed-certs-529341 crio[775]: time="2025-11-23T09:07:41.812295738Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=906a398e-4aa3-4b22-a5ca-981040defffe name=/runtime.v1.ImageService/ImageStatus
	Nov 23 09:07:41 embed-certs-529341 crio[775]: time="2025-11-23T09:07:41.812430817Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=906a398e-4aa3-4b22-a5ca-981040defffe name=/runtime.v1.ImageService/ImageStatus
	Nov 23 09:07:41 embed-certs-529341 crio[775]: time="2025-11-23T09:07:41.812485336Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=906a398e-4aa3-4b22-a5ca-981040defffe name=/runtime.v1.ImageService/ImageStatus
	Nov 23 09:07:41 embed-certs-529341 crio[775]: time="2025-11-23T09:07:41.81337699Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=6c7893bc-800a-4d3f-95d6-8f129c2231bf name=/runtime.v1.ImageService/PullImage
	Nov 23 09:07:41 embed-certs-529341 crio[775]: time="2025-11-23T09:07:41.816539364Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 23 09:07:43 embed-certs-529341 crio[775]: time="2025-11-23T09:07:43.802367102Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=6c7893bc-800a-4d3f-95d6-8f129c2231bf name=/runtime.v1.ImageService/PullImage
	Nov 23 09:07:43 embed-certs-529341 crio[775]: time="2025-11-23T09:07:43.803134005Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=f4e3a8bd-c4a2-4201-b710-43cf4534e32a name=/runtime.v1.ImageService/ImageStatus
	Nov 23 09:07:43 embed-certs-529341 crio[775]: time="2025-11-23T09:07:43.804553868Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=92cb7fba-0100-41e0-8ffd-8d874257843a name=/runtime.v1.ImageService/ImageStatus
	Nov 23 09:07:43 embed-certs-529341 crio[775]: time="2025-11-23T09:07:43.808462991Z" level=info msg="Creating container: default/busybox/busybox" id=ac2735cf-f410-440a-b0e5-2d374352f380 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 09:07:43 embed-certs-529341 crio[775]: time="2025-11-23T09:07:43.808583537Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 09:07:43 embed-certs-529341 crio[775]: time="2025-11-23T09:07:43.812858294Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 09:07:43 embed-certs-529341 crio[775]: time="2025-11-23T09:07:43.813296292Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 09:07:43 embed-certs-529341 crio[775]: time="2025-11-23T09:07:43.851093685Z" level=info msg="Created container 6644e6e35537ec65bf0449d07a6f9c7c9c6c60f2bbf6d00385de53d266e66750: default/busybox/busybox" id=ac2735cf-f410-440a-b0e5-2d374352f380 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 09:07:43 embed-certs-529341 crio[775]: time="2025-11-23T09:07:43.851596755Z" level=info msg="Starting container: 6644e6e35537ec65bf0449d07a6f9c7c9c6c60f2bbf6d00385de53d266e66750" id=77844337-67eb-4240-bcc6-d6f8e58473c5 name=/runtime.v1.RuntimeService/StartContainer
	Nov 23 09:07:43 embed-certs-529341 crio[775]: time="2025-11-23T09:07:43.853373209Z" level=info msg="Started container" PID=1904 containerID=6644e6e35537ec65bf0449d07a6f9c7c9c6c60f2bbf6d00385de53d266e66750 description=default/busybox/busybox id=77844337-67eb-4240-bcc6-d6f8e58473c5 name=/runtime.v1.RuntimeService/StartContainer sandboxID=504904255cfc8ac4cfac4c4286ff6cee2bc82feb18e8db0637138c39a9686809
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                          NAMESPACE
	6644e6e35537e       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998   9 seconds ago       Running             busybox                   0                   504904255cfc8       busybox                                      default
	4cdaa3a069d9b       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                      14 seconds ago      Running             coredns                   0                   4768c39db9643       coredns-66bc5c9577-k4bmj                     kube-system
	50c12036dc729       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      14 seconds ago      Running             storage-provisioner       0                   38f2b7a624cbc       storage-provisioner                          kube-system
	59c8523437edf       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                      25 seconds ago      Running             kindnet-cni               0                   a1993db83bdf7       kindnet-twlcq                                kube-system
	7921acc552df5       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                      25 seconds ago      Running             kube-proxy                0                   c40cf8086eebf       kube-proxy-xfwhk                             kube-system
	8d4b85c1e04ea       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                      35 seconds ago      Running             etcd                      0                   d9beec4de0515       etcd-embed-certs-529341                      kube-system
	5b3965d0035ed       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                      35 seconds ago      Running             kube-apiserver            0                   472c3346ad5dd       kube-apiserver-embed-certs-529341            kube-system
	174f9046dce25       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                      35 seconds ago      Running             kube-controller-manager   0                   06b83c639b900       kube-controller-manager-embed-certs-529341   kube-system
	eb9d1fc5071ff       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                      35 seconds ago      Running             kube-scheduler            0                   d09401708c905       kube-scheduler-embed-certs-529341            kube-system
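
	For reference, the table above is the runtime's answer to the CRI ListContainers call (the same data crictl ps renders). A sketch of issuing that call directly against CRI-O, assuming the default /var/run/crio/crio.sock endpoint and the standard generated client in k8s.io/cri-api:

	package main

	import (
		"context"
		"fmt"
		"log"
		"time"

		"google.golang.org/grpc"
		"google.golang.org/grpc/credentials/insecure"
		runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
	)

	func main() {
		// Dial CRI-O's CRI socket (path assumed; the kubelet uses this endpoint).
		conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
			grpc.WithTransportCredentials(insecure.NewCredentials()))
		if err != nil {
			log.Fatal(err)
		}
		defer conn.Close()

		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
		defer cancel()

		// ListContainers is what backs "crictl ps" and the table above.
		resp, err := runtimeapi.NewRuntimeServiceClient(conn).
			ListContainers(ctx, &runtimeapi.ListContainersRequest{})
		if err != nil {
			log.Fatal(err)
		}
		for _, c := range resp.Containers {
			// Print the truncated ID, name, and state, table-style.
			fmt.Printf("%-13.13s %-25s %s\n",
				c.Id, c.Metadata.Name, runtimeapi.ContainerState_name[int32(c.State)])
		}
	}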
	
	
	==> coredns [4cdaa3a069d9b398246aee655cf8d9ecaf67cb48314772071c97f69188c264f7] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 66f0a748f44f6317a6b122af3f457c9dd0ecaed8718ffbf95a69434523efd9ec4992e71f54c7edd5753646fe9af89ac2138b9c3ce14d4a0ba9d2372a55f120bb
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:33374 - 3584 "HINFO IN 4646067560297301992.7258299850148854626. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.069346184s
	
	
	==> describe nodes <==
	Name:               embed-certs-529341
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-529341
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=50c3a8a3c03e8a84b6c978a884d21c3de8c6d4f1
	                    minikube.k8s.io/name=embed-certs-529341
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_23T09_07_22_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 23 Nov 2025 09:07:19 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-529341
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 23 Nov 2025 09:07:52 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 23 Nov 2025 09:07:51 +0000   Sun, 23 Nov 2025 09:07:17 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 23 Nov 2025 09:07:51 +0000   Sun, 23 Nov 2025 09:07:17 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 23 Nov 2025 09:07:51 +0000   Sun, 23 Nov 2025 09:07:17 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 23 Nov 2025 09:07:51 +0000   Sun, 23 Nov 2025 09:07:38 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    embed-certs-529341
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 9629f1d5bc1ed524a56ce23c69214c09
	  System UUID:                98603fac-552b-4d14-ae49-954d6ab02bae
	  Boot ID:                    49386d37-de97-442f-8364-9b77578bcf47
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         12s
	  kube-system                 coredns-66bc5c9577-k4bmj                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     26s
	  kube-system                 etcd-embed-certs-529341                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         32s
	  kube-system                 kindnet-twlcq                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      26s
	  kube-system                 kube-apiserver-embed-certs-529341             250m (3%)     0 (0%)      0 (0%)           0 (0%)         32s
	  kube-system                 kube-controller-manager-embed-certs-529341    200m (2%)     0 (0%)      0 (0%)           0 (0%)         32s
	  kube-system                 kube-proxy-xfwhk                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         26s
	  kube-system                 kube-scheduler-embed-certs-529341             100m (1%)     0 (0%)      0 (0%)           0 (0%)         32s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         26s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 25s   kube-proxy       
	  Normal  Starting                 32s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  32s   kubelet          Node embed-certs-529341 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    32s   kubelet          Node embed-certs-529341 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     32s   kubelet          Node embed-certs-529341 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           27s   node-controller  Node embed-certs-529341 event: Registered Node embed-certs-529341 in Controller
	  Normal  NodeReady                15s   kubelet          Node embed-certs-529341 status is now: NodeReady
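
	The percentages in the Allocated resources table above are integer-truncated ratios of requests (or limits) to node allocatable: 850m of 8 CPUs is 10%, and 220Mi against ~31Gi of memory truncates to 0%. A quick worked check using the same quantity arithmetic kubectl relies on (k8s.io/apimachinery's resource.Quantity):

	package main

	import (
		"fmt"

		"k8s.io/apimachinery/pkg/api/resource"
	)

	func main() {
		// Values copied from the node description above.
		cpuRequests := resource.MustParse("850m")
		cpuAlloc := resource.MustParse("8")
		memRequests := resource.MustParse("220Mi")
		memAlloc := resource.MustParse("32863356Ki")

		// 100 * 850 / 8000 milli-CPUs = 10, matching "850m (10%)".
		fmt.Printf("cpu: %d%%\n", 100*cpuRequests.MilliValue()/cpuAlloc.MilliValue())
		// 220Mi over ~33.6GB truncates to 0, matching "220Mi (0%)".
		fmt.Printf("memory: %d%%\n", 100*memRequests.Value()/memAlloc.Value())
	}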
	
	
	==> dmesg <==
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff e2 f6 b3 df 65 66 08 06
	[ +15.220231] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff ce d6 cd 1c d5 af 08 06
	[  +0.016823] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff 42 21 9e 70 54 d2 08 06
	[  +0.853950] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 8a f3 da 67 50 34 08 06
	[  +0.000402] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff e2 f6 b3 df 65 66 08 06
	[Nov23 09:06] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 3a fe f0 bb b2 e5 08 06
	[  +0.000433] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000017] ll header: 00000000: ff ff ff ff ff ff 42 21 9e 70 54 d2 08 06
	[ +22.099976] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff fa bc 42 68 ca 38 08 06
	[  +0.042361] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 02 6f 93 2c ed 12 08 06
	[ +12.988668] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff c6 40 c7 0d 08 88 08 06
	[  +0.000458] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff c6 f2 c5 3b d5 0a 08 06
	[  +8.074904] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff ba d8 15 23 cb ea 08 06
	[  +0.000480] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff fa bc 42 68 ca 38 08 06
	
	
	==> etcd [8d4b85c1e04ea7502f152c44bbff8570366f58beebbf2b0a8ff87682cbe35a1f] <==
	{"level":"warn","ts":"2025-11-23T09:07:18.424405Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39040","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:07:18.438344Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39052","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:07:18.446237Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39064","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:07:18.458695Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39082","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:07:18.466235Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39106","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:07:18.473301Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39128","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:07:18.482092Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39148","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:07:18.490346Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39178","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:07:18.497882Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39186","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:07:18.504726Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39200","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:07:18.512691Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39214","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:07:18.521436Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39216","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:07:18.537006Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39230","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:07:18.545891Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39250","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:07:18.561703Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39276","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:07:18.581647Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39292","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:07:18.586405Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39312","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:07:18.593578Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39334","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:07:18.601775Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39368","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:07:18.608769Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39382","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:07:18.616907Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39390","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:07:18.629829Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39416","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:07:18.636809Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39436","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:07:18.646427Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39466","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:07:18.700365Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39486","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 09:07:53 up  1:50,  0 user,  load average: 5.65, 4.24, 2.69
	Linux embed-certs-529341 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [59c8523437edf09aa6f7e263555a76b85d399591a44536005ba7eff1c7f8683d] <==
	I1123 09:07:27.678141       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1123 09:07:27.678463       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I1123 09:07:27.678639       1 main.go:148] setting mtu 1500 for CNI 
	I1123 09:07:27.678663       1 main.go:178] kindnetd IP family: "ipv4"
	I1123 09:07:27.678694       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-23T09:07:27Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1123 09:07:27.974161       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1123 09:07:27.974197       1 controller.go:381] "Waiting for informer caches to sync"
	I1123 09:07:27.974212       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1123 09:07:27.976372       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1123 09:07:28.274766       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1123 09:07:28.274797       1 metrics.go:72] Registering metrics
	I1123 09:07:28.274867       1 controller.go:711] "Syncing nftables rules"
	I1123 09:07:37.977117       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1123 09:07:37.977155       1 main.go:301] handling current node
	I1123 09:07:47.976124       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1123 09:07:47.976163       1 main.go:301] handling current node
	
	
	==> kube-apiserver [5b3965d0035ed321ef49a07622fb76da975715b0ed578473c123912d97984232] <==
	I1123 09:07:19.238629       1 default_servicecidr_controller.go:166] Creating default ServiceCIDR with CIDRs: [10.96.0.0/12]
	I1123 09:07:19.238942       1 controller.go:667] quota admission added evaluator for: namespaces
	I1123 09:07:19.243635       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1123 09:07:19.243824       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1123 09:07:19.248046       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1123 09:07:19.248214       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1123 09:07:19.441506       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1123 09:07:20.142692       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1123 09:07:20.146512       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1123 09:07:20.146531       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1123 09:07:20.717515       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1123 09:07:20.762803       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1123 09:07:20.845500       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1123 09:07:20.852806       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.103.2]
	I1123 09:07:20.853887       1 controller.go:667] quota admission added evaluator for: endpoints
	I1123 09:07:20.858256       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1123 09:07:21.178056       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1123 09:07:21.637513       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1123 09:07:21.646037       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1123 09:07:21.654061       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1123 09:07:26.331107       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1123 09:07:27.030431       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1123 09:07:27.134534       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1123 09:07:27.137881       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	E1123 09:07:51.582816       1 conn.go:339] Error on socket receive: read tcp 192.168.103.2:8443->192.168.103.1:58194: use of closed network connection
	
	
	==> kube-controller-manager [174f9046dce25071c6957d13847782bd6763e3646ada4def964e8128eb103450] <==
	I1123 09:07:26.178048       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1123 09:07:26.178079       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1123 09:07:26.178088       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1123 09:07:26.178101       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1123 09:07:26.178310       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1123 09:07:26.178952       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1123 09:07:26.179095       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1123 09:07:26.180213       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1123 09:07:26.180219       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1123 09:07:26.180597       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1123 09:07:26.182020       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1123 09:07:26.182072       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1123 09:07:26.182107       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1123 09:07:26.182113       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1123 09:07:26.182119       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1123 09:07:26.183578       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1123 09:07:26.188215       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1123 09:07:26.190053       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1123 09:07:26.194746       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1123 09:07:26.197920       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1123 09:07:26.200166       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1123 09:07:26.202371       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="embed-certs-529341" podCIDRs=["10.244.0.0/24"]
	I1123 09:07:26.206662       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1123 09:07:26.211938       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1123 09:07:41.130369       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [7921acc552df50de668f85b246bab497bca2a3e04efd0259bc70230863e22e8f] <==
	I1123 09:07:27.558223       1 server_linux.go:53] "Using iptables proxy"
	I1123 09:07:27.623347       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1123 09:07:27.723986       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1123 09:07:27.724038       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.103.2"]
	E1123 09:07:27.724152       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1123 09:07:27.747865       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1123 09:07:27.747925       1 server_linux.go:132] "Using iptables Proxier"
	I1123 09:07:27.754613       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1123 09:07:27.755077       1 server.go:527] "Version info" version="v1.34.1"
	I1123 09:07:27.755133       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1123 09:07:27.756740       1 config.go:106] "Starting endpoint slice config controller"
	I1123 09:07:27.756766       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1123 09:07:27.756795       1 config.go:200] "Starting service config controller"
	I1123 09:07:27.756803       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1123 09:07:27.761180       1 config.go:403] "Starting serviceCIDR config controller"
	I1123 09:07:27.761216       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1123 09:07:27.761245       1 config.go:309] "Starting node config controller"
	I1123 09:07:27.761263       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1123 09:07:27.761273       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1123 09:07:27.857443       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1123 09:07:27.857443       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1123 09:07:27.861526       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [eb9d1fc5071ff2ddfaed5fefe57ee3c968647e530c5dabe1ac554b286f5211c4] <==
	E1123 09:07:19.203790       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1123 09:07:19.203819       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1123 09:07:19.203877       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1123 09:07:19.203891       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1123 09:07:19.203908       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1123 09:07:19.204008       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1123 09:07:19.204030       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1123 09:07:19.204043       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1123 09:07:19.204084       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1123 09:07:19.204149       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1123 09:07:19.204156       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1123 09:07:19.204219       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1123 09:07:20.097232       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1123 09:07:20.206382       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1123 09:07:20.252269       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1123 09:07:20.367202       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1123 09:07:20.378618       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1123 09:07:20.382200       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1123 09:07:20.406764       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1123 09:07:20.429713       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1123 09:07:20.483569       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1123 09:07:20.483642       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1123 09:07:20.545300       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1123 09:07:20.616200       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	I1123 09:07:22.600276       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 23 09:07:22 embed-certs-529341 kubelet[1309]: I1123 09:07:22.518519    1309 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-embed-certs-529341" podStartSLOduration=1.518492924 podStartE2EDuration="1.518492924s" podCreationTimestamp="2025-11-23 09:07:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 09:07:22.508733151 +0000 UTC m=+1.134252575" watchObservedRunningTime="2025-11-23 09:07:22.518492924 +0000 UTC m=+1.144012348"
	Nov 23 09:07:22 embed-certs-529341 kubelet[1309]: I1123 09:07:22.518715    1309 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-embed-certs-529341" podStartSLOduration=1.5187015860000002 podStartE2EDuration="1.518701586s" podCreationTimestamp="2025-11-23 09:07:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 09:07:22.518613085 +0000 UTC m=+1.144132509" watchObservedRunningTime="2025-11-23 09:07:22.518701586 +0000 UTC m=+1.144221007"
	Nov 23 09:07:22 embed-certs-529341 kubelet[1309]: I1123 09:07:22.528788    1309 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-embed-certs-529341" podStartSLOduration=1.5287674930000001 podStartE2EDuration="1.528767493s" podCreationTimestamp="2025-11-23 09:07:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 09:07:22.528414588 +0000 UTC m=+1.153934012" watchObservedRunningTime="2025-11-23 09:07:22.528767493 +0000 UTC m=+1.154286917"
	Nov 23 09:07:22 embed-certs-529341 kubelet[1309]: I1123 09:07:22.537103    1309 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-embed-certs-529341" podStartSLOduration=1.5370816390000002 podStartE2EDuration="1.537081639s" podCreationTimestamp="2025-11-23 09:07:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 09:07:22.536916402 +0000 UTC m=+1.162435824" watchObservedRunningTime="2025-11-23 09:07:22.537081639 +0000 UTC m=+1.162601058"
	Nov 23 09:07:26 embed-certs-529341 kubelet[1309]: I1123 09:07:26.295880    1309 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Nov 23 09:07:26 embed-certs-529341 kubelet[1309]: I1123 09:07:26.296756    1309 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 23 09:07:27 embed-certs-529341 kubelet[1309]: I1123 09:07:27.084038    1309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/86a6640d-80fe-45a3-b48b-d2577d222ccf-kube-proxy\") pod \"kube-proxy-xfwhk\" (UID: \"86a6640d-80fe-45a3-b48b-d2577d222ccf\") " pod="kube-system/kube-proxy-xfwhk"
	Nov 23 09:07:27 embed-certs-529341 kubelet[1309]: I1123 09:07:27.084255    1309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/45682d16-1f1e-4733-8a6b-31cf7cdfa5bd-xtables-lock\") pod \"kindnet-twlcq\" (UID: \"45682d16-1f1e-4733-8a6b-31cf7cdfa5bd\") " pod="kube-system/kindnet-twlcq"
	Nov 23 09:07:27 embed-certs-529341 kubelet[1309]: I1123 09:07:27.084355    1309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/45682d16-1f1e-4733-8a6b-31cf7cdfa5bd-lib-modules\") pod \"kindnet-twlcq\" (UID: \"45682d16-1f1e-4733-8a6b-31cf7cdfa5bd\") " pod="kube-system/kindnet-twlcq"
	Nov 23 09:07:27 embed-certs-529341 kubelet[1309]: I1123 09:07:27.084438    1309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6z47f\" (UniqueName: \"kubernetes.io/projected/86a6640d-80fe-45a3-b48b-d2577d222ccf-kube-api-access-6z47f\") pod \"kube-proxy-xfwhk\" (UID: \"86a6640d-80fe-45a3-b48b-d2577d222ccf\") " pod="kube-system/kube-proxy-xfwhk"
	Nov 23 09:07:27 embed-certs-529341 kubelet[1309]: I1123 09:07:27.084507    1309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/45682d16-1f1e-4733-8a6b-31cf7cdfa5bd-cni-cfg\") pod \"kindnet-twlcq\" (UID: \"45682d16-1f1e-4733-8a6b-31cf7cdfa5bd\") " pod="kube-system/kindnet-twlcq"
	Nov 23 09:07:27 embed-certs-529341 kubelet[1309]: I1123 09:07:27.084576    1309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6pv4v\" (UniqueName: \"kubernetes.io/projected/45682d16-1f1e-4733-8a6b-31cf7cdfa5bd-kube-api-access-6pv4v\") pod \"kindnet-twlcq\" (UID: \"45682d16-1f1e-4733-8a6b-31cf7cdfa5bd\") " pod="kube-system/kindnet-twlcq"
	Nov 23 09:07:27 embed-certs-529341 kubelet[1309]: I1123 09:07:27.084652    1309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/86a6640d-80fe-45a3-b48b-d2577d222ccf-xtables-lock\") pod \"kube-proxy-xfwhk\" (UID: \"86a6640d-80fe-45a3-b48b-d2577d222ccf\") " pod="kube-system/kube-proxy-xfwhk"
	Nov 23 09:07:27 embed-certs-529341 kubelet[1309]: I1123 09:07:27.084679    1309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/86a6640d-80fe-45a3-b48b-d2577d222ccf-lib-modules\") pod \"kube-proxy-xfwhk\" (UID: \"86a6640d-80fe-45a3-b48b-d2577d222ccf\") " pod="kube-system/kube-proxy-xfwhk"
	Nov 23 09:07:28 embed-certs-529341 kubelet[1309]: I1123 09:07:28.513490    1309 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-xfwhk" podStartSLOduration=1.5134651 podStartE2EDuration="1.5134651s" podCreationTimestamp="2025-11-23 09:07:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 09:07:28.513432066 +0000 UTC m=+7.138951489" watchObservedRunningTime="2025-11-23 09:07:28.5134651 +0000 UTC m=+7.138984527"
	Nov 23 09:07:28 embed-certs-529341 kubelet[1309]: I1123 09:07:28.513621    1309 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-twlcq" podStartSLOduration=1.513612133 podStartE2EDuration="1.513612133s" podCreationTimestamp="2025-11-23 09:07:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 09:07:27.545578576 +0000 UTC m=+6.171098001" watchObservedRunningTime="2025-11-23 09:07:28.513612133 +0000 UTC m=+7.139131557"
	Nov 23 09:07:38 embed-certs-529341 kubelet[1309]: I1123 09:07:38.364934    1309 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Nov 23 09:07:38 embed-certs-529341 kubelet[1309]: I1123 09:07:38.465769    1309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/c60e7298-2b0f-49f5-afde-b97e4bc8287d-tmp\") pod \"storage-provisioner\" (UID: \"c60e7298-2b0f-49f5-afde-b97e4bc8287d\") " pod="kube-system/storage-provisioner"
	Nov 23 09:07:38 embed-certs-529341 kubelet[1309]: I1123 09:07:38.465819    1309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0676d3db-d11b-433f-9c17-6131468d109d-config-volume\") pod \"coredns-66bc5c9577-k4bmj\" (UID: \"0676d3db-d11b-433f-9c17-6131468d109d\") " pod="kube-system/coredns-66bc5c9577-k4bmj"
	Nov 23 09:07:38 embed-certs-529341 kubelet[1309]: I1123 09:07:38.465858    1309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m8ndh\" (UniqueName: \"kubernetes.io/projected/0676d3db-d11b-433f-9c17-6131468d109d-kube-api-access-m8ndh\") pod \"coredns-66bc5c9577-k4bmj\" (UID: \"0676d3db-d11b-433f-9c17-6131468d109d\") " pod="kube-system/coredns-66bc5c9577-k4bmj"
	Nov 23 09:07:38 embed-certs-529341 kubelet[1309]: I1123 09:07:38.465950    1309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jpff9\" (UniqueName: \"kubernetes.io/projected/c60e7298-2b0f-49f5-afde-b97e4bc8287d-kube-api-access-jpff9\") pod \"storage-provisioner\" (UID: \"c60e7298-2b0f-49f5-afde-b97e4bc8287d\") " pod="kube-system/storage-provisioner"
	Nov 23 09:07:39 embed-certs-529341 kubelet[1309]: I1123 09:07:39.547605    1309 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-k4bmj" podStartSLOduration=12.547582003 podStartE2EDuration="12.547582003s" podCreationTimestamp="2025-11-23 09:07:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 09:07:39.547519104 +0000 UTC m=+18.173038528" watchObservedRunningTime="2025-11-23 09:07:39.547582003 +0000 UTC m=+18.173101429"
	Nov 23 09:07:39 embed-certs-529341 kubelet[1309]: I1123 09:07:39.547834    1309 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=12.547823661 podStartE2EDuration="12.547823661s" podCreationTimestamp="2025-11-23 09:07:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 09:07:39.537493545 +0000 UTC m=+18.163012968" watchObservedRunningTime="2025-11-23 09:07:39.547823661 +0000 UTC m=+18.173343085"
	Nov 23 09:07:41 embed-certs-529341 kubelet[1309]: I1123 09:07:41.583198    1309 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2g6fh\" (UniqueName: \"kubernetes.io/projected/05390c6f-b2aa-4701-8a3d-9119282e9b94-kube-api-access-2g6fh\") pod \"busybox\" (UID: \"05390c6f-b2aa-4701-8a3d-9119282e9b94\") " pod="default/busybox"
	Nov 23 09:07:44 embed-certs-529341 kubelet[1309]: I1123 09:07:44.553287    1309 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=1.562182816 podStartE2EDuration="3.553267108s" podCreationTimestamp="2025-11-23 09:07:41 +0000 UTC" firstStartedPulling="2025-11-23 09:07:41.81285678 +0000 UTC m=+20.438376183" lastFinishedPulling="2025-11-23 09:07:43.803941072 +0000 UTC m=+22.429460475" observedRunningTime="2025-11-23 09:07:44.552906765 +0000 UTC m=+23.178426190" watchObservedRunningTime="2025-11-23 09:07:44.553267108 +0000 UTC m=+23.178786531"
	
	
	==> storage-provisioner [50c12036dc729d9603dc1e29f4ca838b0d6b7f8a54ae8be5ea98711ea5735544] <==
	I1123 09:07:38.814654       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1123 09:07:38.826136       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1123 09:07:38.826907       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1123 09:07:38.830149       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:07:38.839020       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1123 09:07:38.839301       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1123 09:07:38.839379       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"0e5db35e-1f56-4468-b2dc-3282fdc016b1", APIVersion:"v1", ResourceVersion:"408", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-529341_1e0854cc-4674-4c37-ab88-821af9d3b8bc became leader
	I1123 09:07:38.839474       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-529341_1e0854cc-4674-4c37-ab88-821af9d3b8bc!
	W1123 09:07:38.844790       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:07:38.855899       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1123 09:07:38.939811       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-529341_1e0854cc-4674-4c37-ab88-821af9d3b8bc!
	W1123 09:07:40.858740       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:07:40.863122       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:07:42.866374       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:07:42.871099       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:07:44.874028       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:07:44.878192       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:07:46.881838       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:07:46.887412       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:07:48.891688       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:07:48.895526       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:07:50.898830       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:07:50.902731       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:07:52.906818       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:07:52.912279       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
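The storage-provisioner log above takes its leader lease through the kube-system/k8s.io-minikube-hostpath Endpoints object, which is also what triggers the repeated "v1 Endpoints is deprecated in v1.33+" warnings. A minimal sketch for inspecting that record by hand, assuming the same kubectl context the test uses:

	# Show the Endpoints object used for leader election (named in the log above)
	kubectl --context embed-certs-529341 -n kube-system get endpoints k8s.io-minikube-hostpath -o yaml
	# The deprecation warning points at the EndpointSlice API as the replacement
	kubectl --context embed-certs-529341 -n kube-system get endpointslices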
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-529341 -n embed-certs-529341
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-529341 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (2.28s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (2.23s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-602386 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-602386 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (281.119096ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T09:08:00Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
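The stderr above pins the failure on minikube's paused-state check rather than on the addon itself: before enabling an addon it lists runc containers, and that listing fails because /run/runc does not exist on the node. A minimal sketch to reproduce the failing check by hand, assuming the same profile name (the runc invocation is copied from the error message):

	# Re-run the same paused-state check inside the node
	out/minikube-linux-amd64 ssh -p default-k8s-diff-port-602386 -- sudo runc list -f json
	# On this CRI-O node this prints: open /run/runc: no such file or directory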
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-602386 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-602386 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-602386 describe deploy/metrics-server -n kube-system: exit status 1 (71.154048ms)

** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context default-k8s-diff-port-602386 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
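The two flags passed above (--images=MetricsServer=registry.k8s.io/echoserver:1.4 and --registries=MetricsServer=fake.domain) are expected to rewrite the metrics-server image to fake.domain/registry.k8s.io/echoserver:1.4, the exact string the assertion looks for. A sketch of the manual check the test approximates, hypothetical here since the deployment was never created:

	# Print the image the metrics-server deployment would be running
	kubectl --context default-k8s-diff-port-602386 -n kube-system \
	  get deploy metrics-server -o jsonpath='{.spec.template.spec.containers[0].image}'
	# Expected output: fake.domain/registry.k8s.io/echoserver:1.4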
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-602386
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-602386:

-- stdout --
	[
	    {
	        "Id": "6c3d05e1255194aef2389817b0cc65aba6302f39805c376439e608e1f11bacd3",
	        "Created": "2025-11-23T09:07:12.808038368Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 402274,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-23T09:07:12.845449718Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:133ca4ac39008d0056ad45d8cb70521d6b70d6e1b8bbff4678fd4b354efbdf70",
	        "ResolvConfPath": "/var/lib/docker/containers/6c3d05e1255194aef2389817b0cc65aba6302f39805c376439e608e1f11bacd3/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/6c3d05e1255194aef2389817b0cc65aba6302f39805c376439e608e1f11bacd3/hostname",
	        "HostsPath": "/var/lib/docker/containers/6c3d05e1255194aef2389817b0cc65aba6302f39805c376439e608e1f11bacd3/hosts",
	        "LogPath": "/var/lib/docker/containers/6c3d05e1255194aef2389817b0cc65aba6302f39805c376439e608e1f11bacd3/6c3d05e1255194aef2389817b0cc65aba6302f39805c376439e608e1f11bacd3-json.log",
	        "Name": "/default-k8s-diff-port-602386",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-602386:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "default-k8s-diff-port-602386",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "6c3d05e1255194aef2389817b0cc65aba6302f39805c376439e608e1f11bacd3",
	                "LowerDir": "/var/lib/docker/overlay2/bb5d6810584e73e290c3816b7cb94fabd3ce1d5d8e0d0a63df744232dca3547d-init/diff:/var/lib/docker/overlay2/5d7426d7b7590330a796f9ce8929a3dfaf6f95af5e86f4e4ea7f2a7f53308616/diff",
	                "MergedDir": "/var/lib/docker/overlay2/bb5d6810584e73e290c3816b7cb94fabd3ce1d5d8e0d0a63df744232dca3547d/merged",
	                "UpperDir": "/var/lib/docker/overlay2/bb5d6810584e73e290c3816b7cb94fabd3ce1d5d8e0d0a63df744232dca3547d/diff",
	                "WorkDir": "/var/lib/docker/overlay2/bb5d6810584e73e290c3816b7cb94fabd3ce1d5d8e0d0a63df744232dca3547d/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-602386",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-602386/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-602386",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-602386",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-602386",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "2cf3f246a990c5c8fa6618f7669eb3c8c7018e31df349df6bc06609d0dd99b52",
	            "SandboxKey": "/var/run/docker/netns/2cf3f246a990",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33103"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33104"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33107"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33105"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33106"
	                    }
	                ]
	            },
	            "Networks": {
	                "default-k8s-diff-port-602386": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "d9296937e29fbbdf6c66a1bc434a999db9b649eec0fa16933c388a9a19b340fe",
	                    "EndpointID": "a730547af600d8bd1c2c378e34fff045ba495fac9758f985fd33202659c5f474",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "MacAddress": "82:48:78:e1:0a:5a",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-602386",
	                        "6c3d05e12551"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
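The NetworkSettings.Ports block in the inspect output above holds the per-profile host port mappings; 8444/tcp -> 127.0.0.1:33106 is the non-default API server port this profile exists to exercise. A compact sketch for pulling just that block, assuming the same container name:

	# Print only the host port mappings from the inspect output above
	docker inspect -f '{{json .NetworkSettings.Ports}}' default-k8s-diff-port-602386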
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-602386 -n default-k8s-diff-port-602386
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-602386 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-602386 logs -n 25: (1.019528146s)
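The advice box printed earlier asks reporters to attach full logs, while the harness captures only the last 25 lines per component below. A sketch of the file-based capture, using the flag the advice box itself names:

	# Write the complete logs to a file suitable for attaching to an issue
	out/minikube-linux-amd64 -p default-k8s-diff-port-602386 logs --file=logs.txt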
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p bridge-741183 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                                                                                                          │ bridge-741183                │ jenkins │ v1.37.0 │ 23 Nov 25 09:07 UTC │ 23 Nov 25 09:07 UTC │
	│ ssh     │ -p bridge-741183 sudo cri-dockerd --version                                                                                                                                                                                                   │ bridge-741183                │ jenkins │ v1.37.0 │ 23 Nov 25 09:07 UTC │ 23 Nov 25 09:07 UTC │
	│ ssh     │ -p bridge-741183 sudo systemctl status containerd --all --full --no-pager                                                                                                                                                                     │ bridge-741183                │ jenkins │ v1.37.0 │ 23 Nov 25 09:07 UTC │                     │
	│ ssh     │ -p bridge-741183 sudo systemctl cat containerd --no-pager                                                                                                                                                                                     │ bridge-741183                │ jenkins │ v1.37.0 │ 23 Nov 25 09:07 UTC │ 23 Nov 25 09:07 UTC │
	│ ssh     │ -p bridge-741183 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                              │ bridge-741183                │ jenkins │ v1.37.0 │ 23 Nov 25 09:07 UTC │ 23 Nov 25 09:07 UTC │
	│ ssh     │ -p bridge-741183 sudo cat /etc/containerd/config.toml                                                                                                                                                                                         │ bridge-741183                │ jenkins │ v1.37.0 │ 23 Nov 25 09:07 UTC │ 23 Nov 25 09:07 UTC │
	│ ssh     │ -p bridge-741183 sudo containerd config dump                                                                                                                                                                                                  │ bridge-741183                │ jenkins │ v1.37.0 │ 23 Nov 25 09:07 UTC │ 23 Nov 25 09:07 UTC │
	│ ssh     │ -p bridge-741183 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ bridge-741183                │ jenkins │ v1.37.0 │ 23 Nov 25 09:07 UTC │ 23 Nov 25 09:07 UTC │
	│ ssh     │ -p bridge-741183 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ bridge-741183                │ jenkins │ v1.37.0 │ 23 Nov 25 09:07 UTC │ 23 Nov 25 09:07 UTC │
	│ ssh     │ -p bridge-741183 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ bridge-741183                │ jenkins │ v1.37.0 │ 23 Nov 25 09:07 UTC │ 23 Nov 25 09:07 UTC │
	│ ssh     │ -p bridge-741183 sudo crio config                                                                                                                                                                                                             │ bridge-741183                │ jenkins │ v1.37.0 │ 23 Nov 25 09:07 UTC │ 23 Nov 25 09:07 UTC │
	│ delete  │ -p bridge-741183                                                                                                                                                                                                                              │ bridge-741183                │ jenkins │ v1.37.0 │ 23 Nov 25 09:07 UTC │ 23 Nov 25 09:07 UTC │
	│ delete  │ -p disable-driver-mounts-740936                                                                                                                                                                                                               │ disable-driver-mounts-740936 │ jenkins │ v1.37.0 │ 23 Nov 25 09:07 UTC │ 23 Nov 25 09:07 UTC │
	│ start   │ -p default-k8s-diff-port-602386 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-602386 │ jenkins │ v1.37.0 │ 23 Nov 25 09:07 UTC │ 23 Nov 25 09:07 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-054094 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-054094       │ jenkins │ v1.37.0 │ 23 Nov 25 09:07 UTC │                     │
	│ stop    │ -p old-k8s-version-054094 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-054094       │ jenkins │ v1.37.0 │ 23 Nov 25 09:07 UTC │ 23 Nov 25 09:07 UTC │
	│ addons  │ enable metrics-server -p no-preload-619589 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-619589            │ jenkins │ v1.37.0 │ 23 Nov 25 09:07 UTC │                     │
	│ stop    │ -p no-preload-619589 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-619589            │ jenkins │ v1.37.0 │ 23 Nov 25 09:07 UTC │ 23 Nov 25 09:07 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-054094 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-054094       │ jenkins │ v1.37.0 │ 23 Nov 25 09:07 UTC │ 23 Nov 25 09:07 UTC │
	│ start   │ -p old-k8s-version-054094 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-054094       │ jenkins │ v1.37.0 │ 23 Nov 25 09:07 UTC │                     │
	│ addons  │ enable dashboard -p no-preload-619589 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-619589            │ jenkins │ v1.37.0 │ 23 Nov 25 09:07 UTC │ 23 Nov 25 09:07 UTC │
	│ start   │ -p no-preload-619589 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-619589            │ jenkins │ v1.37.0 │ 23 Nov 25 09:07 UTC │                     │
	│ addons  │ enable metrics-server -p embed-certs-529341 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-529341           │ jenkins │ v1.37.0 │ 23 Nov 25 09:07 UTC │                     │
	│ stop    │ -p embed-certs-529341 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-529341           │ jenkins │ v1.37.0 │ 23 Nov 25 09:07 UTC │                     │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-602386 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-602386 │ jenkins │ v1.37.0 │ 23 Nov 25 09:08 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/23 09:07:47
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1123 09:07:47.433038  409946 out.go:360] Setting OutFile to fd 1 ...
	I1123 09:07:47.433329  409946 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 09:07:47.433340  409946 out.go:374] Setting ErrFile to fd 2...
	I1123 09:07:47.433346  409946 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 09:07:47.433568  409946 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21969-103686/.minikube/bin
	I1123 09:07:47.434047  409946 out.go:368] Setting JSON to false
	I1123 09:07:47.435256  409946 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":6607,"bootTime":1763882260,"procs":326,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1123 09:07:47.435315  409946 start.go:143] virtualization: kvm guest
	I1123 09:07:47.437173  409946 out.go:179] * [no-preload-619589] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1123 09:07:47.438290  409946 out.go:179]   - MINIKUBE_LOCATION=21969
	I1123 09:07:47.438300  409946 notify.go:221] Checking for updates...
	I1123 09:07:47.440719  409946 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1123 09:07:47.441714  409946 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21969-103686/kubeconfig
	I1123 09:07:47.442677  409946 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21969-103686/.minikube
	I1123 09:07:47.443689  409946 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1123 09:07:47.444737  409946 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1123 09:07:47.446257  409946 config.go:182] Loaded profile config "no-preload-619589": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 09:07:47.446828  409946 driver.go:422] Setting default libvirt URI to qemu:///system
	I1123 09:07:47.470401  409946 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1123 09:07:47.470578  409946 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 09:07:47.531859  409946 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:68 OomKillDisable:false NGoroutines:77 SystemTime:2025-11-23 09:07:47.520365502 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1123 09:07:47.532018  409946 docker.go:319] overlay module found
	I1123 09:07:47.533728  409946 out.go:179] * Using the docker driver based on existing profile
	I1123 09:07:47.534776  409946 start.go:309] selected driver: docker
	I1123 09:07:47.534793  409946 start.go:927] validating driver "docker" against &{Name:no-preload-619589 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-619589 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 09:07:47.534880  409946 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1123 09:07:47.535511  409946 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 09:07:47.596171  409946 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:68 OomKillDisable:false NGoroutines:77 SystemTime:2025-11-23 09:07:47.586731934 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1123 09:07:47.596488  409946 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1123 09:07:47.596521  409946 cni.go:84] Creating CNI manager for ""
	I1123 09:07:47.596575  409946 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1123 09:07:47.596642  409946 start.go:353] cluster config:
	{Name:no-preload-619589 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-619589 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 09:07:47.598398  409946 out.go:179] * Starting "no-preload-619589" primary control-plane node in "no-preload-619589" cluster
	I1123 09:07:47.599465  409946 cache.go:134] Beginning downloading kic base image for docker with crio
	I1123 09:07:47.600568  409946 out.go:179] * Pulling base image v0.0.48-1763789673-21948 ...
	I1123 09:07:47.601700  409946 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1123 09:07:47.601794  409946 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1123 09:07:47.601841  409946 profile.go:143] Saving config to /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/no-preload-619589/config.json ...
	I1123 09:07:47.602158  409946 cache.go:107] acquiring lock: {Name:mk7ecb1d61353190a66ac7e6ba6d7eceff124308 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1123 09:07:47.602188  409946 cache.go:107] acquiring lock: {Name:mkcd32e302108894f2df717f3cf5b3ecc1441854 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1123 09:07:47.602216  409946 cache.go:107] acquiring lock: {Name:mk8d2515122be33e67ac10c187be7568a955083e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1123 09:07:47.602234  409946 cache.go:107] acquiring lock: {Name:mk379accc7020b7a3caf0ec5d82ca28cbbc76a2d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1123 09:07:47.602306  409946 cache.go:115] /home/jenkins/minikube-integration/21969-103686/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1123 09:07:47.602309  409946 cache.go:107] acquiring lock: {Name:mk4c9dc3c03a83bc838f3aacfba91088513abdd2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1123 09:07:47.602331  409946 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21969-103686/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 163.138µs
	I1123 09:07:47.602329  409946 cache.go:107] acquiring lock: {Name:mkd3cdfd6b4407abefc43a25dbf6321eab12e5b3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1123 09:07:47.602348  409946 cache.go:115] /home/jenkins/minikube-integration/21969-103686/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 exists
	I1123 09:07:47.602356  409946 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21969-103686/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1123 09:07:47.602287  409946 cache.go:115] /home/jenkins/minikube-integration/21969-103686/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1 exists
	I1123 09:07:47.602361  409946 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21969-103686/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1" took 164.806µs
	I1123 09:07:47.602381  409946 cache.go:115] /home/jenkins/minikube-integration/21969-103686/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1 exists
	I1123 09:07:47.602394  409946 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21969-103686/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 succeeded
	I1123 09:07:47.602396  409946 cache.go:115] /home/jenkins/minikube-integration/21969-103686/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1 exists
	I1123 09:07:47.602267  409946 cache.go:107] acquiring lock: {Name:mk029a98d17710f60c9431e6478449e7182acb86 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1123 09:07:47.602402  409946 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.34.1" -> "/home/jenkins/minikube-integration/21969-103686/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1" took 99.714µs
	I1123 09:07:47.602406  409946 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.34.1" -> "/home/jenkins/minikube-integration/21969-103686/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1" took 213.953µs
	I1123 09:07:47.602413  409946 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.34.1 -> /home/jenkins/minikube-integration/21969-103686/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1 succeeded
	I1123 09:07:47.602416  409946 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.34.1 -> /home/jenkins/minikube-integration/21969-103686/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1 succeeded
	I1123 09:07:47.602377  409946 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.34.1" -> "/home/jenkins/minikube-integration/21969-103686/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1" took 236.623µs
	I1123 09:07:47.602428  409946 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.34.1 -> /home/jenkins/minikube-integration/21969-103686/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1 succeeded
	I1123 09:07:47.602434  409946 cache.go:115] /home/jenkins/minikube-integration/21969-103686/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1 exists
	I1123 09:07:47.602296  409946 cache.go:107] acquiring lock: {Name:mk535005b718e5b2e2a19960fd3637ce47e59ae7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1123 09:07:47.602434  409946 cache.go:115] /home/jenkins/minikube-integration/21969-103686/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 exists
	I1123 09:07:47.602442  409946 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.34.1" -> "/home/jenkins/minikube-integration/21969-103686/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1" took 220.718µs
	I1123 09:07:47.602451  409946 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.34.1 -> /home/jenkins/minikube-integration/21969-103686/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1 succeeded
	I1123 09:07:47.602448  409946 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.12.1" -> "/home/jenkins/minikube-integration/21969-103686/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1" took 122.924µs
	I1123 09:07:47.602468  409946 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.12.1 -> /home/jenkins/minikube-integration/21969-103686/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 succeeded
	I1123 09:07:47.602471  409946 cache.go:115] /home/jenkins/minikube-integration/21969-103686/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 exists
	I1123 09:07:47.602484  409946 cache.go:96] cache image "registry.k8s.io/etcd:3.6.4-0" -> "/home/jenkins/minikube-integration/21969-103686/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0" took 195.122µs
	I1123 09:07:47.602495  409946 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.4-0 -> /home/jenkins/minikube-integration/21969-103686/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 succeeded
	I1123 09:07:47.602508  409946 cache.go:87] Successfully saved all images to host disk.
	I1123 09:07:47.625922  409946 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon, skipping pull
	I1123 09:07:47.625947  409946 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in daemon, skipping load
	I1123 09:07:47.625995  409946 cache.go:243] Successfully downloaded all kic artifacts
	I1123 09:07:47.626046  409946 start.go:360] acquireMachinesLock for no-preload-619589: {Name:mk679054b670a8ea923c71659f0f4888c22bf79b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1123 09:07:47.626120  409946 start.go:364] duration metric: took 51.356µs to acquireMachinesLock for "no-preload-619589"
	I1123 09:07:47.626142  409946 start.go:96] Skipping create...Using existing machine configuration
	I1123 09:07:47.626148  409946 fix.go:54] fixHost starting: 
	I1123 09:07:47.626445  409946 cli_runner.go:164] Run: docker container inspect no-preload-619589 --format={{.State.Status}}
	I1123 09:07:47.649066  409946 fix.go:112] recreateIfNeeded on no-preload-619589: state=Stopped err=<nil>
	W1123 09:07:47.649104  409946 fix.go:138] unexpected machine state, will restart: <nil>
	W1123 09:07:45.271169  401015 node_ready.go:57] node "default-k8s-diff-port-602386" has "Ready":"False" status (will retry)
	W1123 09:07:47.271685  401015 node_ready.go:57] node "default-k8s-diff-port-602386" has "Ready":"False" status (will retry)
	I1123 09:07:47.772070  401015 node_ready.go:49] node "default-k8s-diff-port-602386" is "Ready"
	I1123 09:07:47.772100  401015 node_ready.go:38] duration metric: took 12.004063012s for node "default-k8s-diff-port-602386" to be "Ready" ...
	I1123 09:07:47.772120  401015 api_server.go:52] waiting for apiserver process to appear ...
	I1123 09:07:47.772173  401015 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1123 09:07:47.788797  401015 api_server.go:72] duration metric: took 12.300702565s to wait for apiserver process to appear ...
	I1123 09:07:47.788827  401015 api_server.go:88] waiting for apiserver healthz status ...
	I1123 09:07:47.788852  401015 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8444/healthz ...
	I1123 09:07:47.793700  401015 api_server.go:279] https://192.168.94.2:8444/healthz returned 200:
	ok
	I1123 09:07:47.794993  401015 api_server.go:141] control plane version: v1.34.1
	I1123 09:07:47.795026  401015 api_server.go:131] duration metric: took 6.189285ms to wait for apiserver health ...
	I1123 09:07:47.795038  401015 system_pods.go:43] waiting for kube-system pods to appear ...
	I1123 09:07:47.801006  401015 system_pods.go:59] 8 kube-system pods found
	I1123 09:07:47.801056  401015 system_pods.go:61] "coredns-66bc5c9577-64rdm" [47d854af-a566-4a34-a2aa-c7e774b7349f] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 09:07:47.801064  401015 system_pods.go:61] "etcd-default-k8s-diff-port-602386" [8c784536-b148-44b8-b699-d96e8396249b] Running
	I1123 09:07:47.801072  401015 system_pods.go:61] "kindnet-kqj66" [33d86c85-7de0-42a9-90af-50ba26b9c963] Running
	I1123 09:07:47.801078  401015 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-602386" [1e21b240-758f-44a7-9892-64bcd5868d3c] Running
	I1123 09:07:47.802944  401015 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-602386" [458d9abb-1c23-41f9-ad60-30086af0728a] Running
	I1123 09:07:47.802978  401015 system_pods.go:61] "kube-proxy-wnrqx" [0c0df979-c169-4565-8362-dca7550a80f5] Running
	I1123 09:07:47.802987  401015 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-602386" [530a6482-dbd1-47d7-bd47-9dcef0947a43] Running
	I1123 09:07:47.803000  401015 system_pods.go:61] "storage-provisioner" [68aad3cb-9d9e-4bca-9271-f4b65e2a8a9f] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 09:07:47.803009  401015 system_pods.go:74] duration metric: took 7.964015ms to wait for pod list to return data ...
	I1123 09:07:47.803032  401015 default_sa.go:34] waiting for default service account to be created ...
	I1123 09:07:47.811703  401015 default_sa.go:45] found service account: "default"
	I1123 09:07:47.811744  401015 default_sa.go:55] duration metric: took 8.703815ms for default service account to be created ...
	I1123 09:07:47.811764  401015 system_pods.go:116] waiting for k8s-apps to be running ...
	I1123 09:07:47.814987  401015 system_pods.go:86] 8 kube-system pods found
	I1123 09:07:47.815030  401015 system_pods.go:89] "coredns-66bc5c9577-64rdm" [47d854af-a566-4a34-a2aa-c7e774b7349f] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 09:07:47.815039  401015 system_pods.go:89] "etcd-default-k8s-diff-port-602386" [8c784536-b148-44b8-b699-d96e8396249b] Running
	I1123 09:07:47.815048  401015 system_pods.go:89] "kindnet-kqj66" [33d86c85-7de0-42a9-90af-50ba26b9c963] Running
	I1123 09:07:47.815054  401015 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-602386" [1e21b240-758f-44a7-9892-64bcd5868d3c] Running
	I1123 09:07:47.815060  401015 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-602386" [458d9abb-1c23-41f9-ad60-30086af0728a] Running
	I1123 09:07:47.815065  401015 system_pods.go:89] "kube-proxy-wnrqx" [0c0df979-c169-4565-8362-dca7550a80f5] Running
	I1123 09:07:47.815070  401015 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-602386" [530a6482-dbd1-47d7-bd47-9dcef0947a43] Running
	I1123 09:07:47.815075  401015 system_pods.go:89] "storage-provisioner" [68aad3cb-9d9e-4bca-9271-f4b65e2a8a9f] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 09:07:47.815111  401015 retry.go:31] will retry after 199.782533ms: missing components: kube-dns
	I1123 09:07:48.022278  401015 system_pods.go:86] 8 kube-system pods found
	I1123 09:07:48.022316  401015 system_pods.go:89] "coredns-66bc5c9577-64rdm" [47d854af-a566-4a34-a2aa-c7e774b7349f] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 09:07:48.022326  401015 system_pods.go:89] "etcd-default-k8s-diff-port-602386" [8c784536-b148-44b8-b699-d96e8396249b] Running
	I1123 09:07:48.022335  401015 system_pods.go:89] "kindnet-kqj66" [33d86c85-7de0-42a9-90af-50ba26b9c963] Running
	I1123 09:07:48.022341  401015 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-602386" [1e21b240-758f-44a7-9892-64bcd5868d3c] Running
	I1123 09:07:48.022347  401015 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-602386" [458d9abb-1c23-41f9-ad60-30086af0728a] Running
	I1123 09:07:48.022353  401015 system_pods.go:89] "kube-proxy-wnrqx" [0c0df979-c169-4565-8362-dca7550a80f5] Running
	I1123 09:07:48.022359  401015 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-602386" [530a6482-dbd1-47d7-bd47-9dcef0947a43] Running
	I1123 09:07:48.022368  401015 system_pods.go:89] "storage-provisioner" [68aad3cb-9d9e-4bca-9271-f4b65e2a8a9f] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 09:07:48.022388  401015 retry.go:31] will retry after 299.362918ms: missing components: kube-dns
	I1123 09:07:48.326631  401015 system_pods.go:86] 8 kube-system pods found
	I1123 09:07:48.326678  401015 system_pods.go:89] "coredns-66bc5c9577-64rdm" [47d854af-a566-4a34-a2aa-c7e774b7349f] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 09:07:48.326686  401015 system_pods.go:89] "etcd-default-k8s-diff-port-602386" [8c784536-b148-44b8-b699-d96e8396249b] Running
	I1123 09:07:48.326694  401015 system_pods.go:89] "kindnet-kqj66" [33d86c85-7de0-42a9-90af-50ba26b9c963] Running
	I1123 09:07:48.326701  401015 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-602386" [1e21b240-758f-44a7-9892-64bcd5868d3c] Running
	I1123 09:07:48.326707  401015 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-602386" [458d9abb-1c23-41f9-ad60-30086af0728a] Running
	I1123 09:07:48.326713  401015 system_pods.go:89] "kube-proxy-wnrqx" [0c0df979-c169-4565-8362-dca7550a80f5] Running
	I1123 09:07:48.326718  401015 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-602386" [530a6482-dbd1-47d7-bd47-9dcef0947a43] Running
	I1123 09:07:48.326726  401015 system_pods.go:89] "storage-provisioner" [68aad3cb-9d9e-4bca-9271-f4b65e2a8a9f] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 09:07:48.326775  401015 retry.go:31] will retry after 405.595607ms: missing components: kube-dns
	I1123 09:07:48.736501  401015 system_pods.go:86] 8 kube-system pods found
	I1123 09:07:48.736531  401015 system_pods.go:89] "coredns-66bc5c9577-64rdm" [47d854af-a566-4a34-a2aa-c7e774b7349f] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 09:07:48.736536  401015 system_pods.go:89] "etcd-default-k8s-diff-port-602386" [8c784536-b148-44b8-b699-d96e8396249b] Running
	I1123 09:07:48.736542  401015 system_pods.go:89] "kindnet-kqj66" [33d86c85-7de0-42a9-90af-50ba26b9c963] Running
	I1123 09:07:48.736546  401015 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-602386" [1e21b240-758f-44a7-9892-64bcd5868d3c] Running
	I1123 09:07:48.736551  401015 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-602386" [458d9abb-1c23-41f9-ad60-30086af0728a] Running
	I1123 09:07:48.736555  401015 system_pods.go:89] "kube-proxy-wnrqx" [0c0df979-c169-4565-8362-dca7550a80f5] Running
	I1123 09:07:48.736558  401015 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-602386" [530a6482-dbd1-47d7-bd47-9dcef0947a43] Running
	I1123 09:07:48.736565  401015 system_pods.go:89] "storage-provisioner" [68aad3cb-9d9e-4bca-9271-f4b65e2a8a9f] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 09:07:48.736584  401015 retry.go:31] will retry after 597.521925ms: missing components: kube-dns
	I1123 09:07:49.338640  401015 system_pods.go:86] 8 kube-system pods found
	I1123 09:07:49.338677  401015 system_pods.go:89] "coredns-66bc5c9577-64rdm" [47d854af-a566-4a34-a2aa-c7e774b7349f] Running
	I1123 09:07:49.338684  401015 system_pods.go:89] "etcd-default-k8s-diff-port-602386" [8c784536-b148-44b8-b699-d96e8396249b] Running
	I1123 09:07:49.338690  401015 system_pods.go:89] "kindnet-kqj66" [33d86c85-7de0-42a9-90af-50ba26b9c963] Running
	I1123 09:07:49.338694  401015 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-602386" [1e21b240-758f-44a7-9892-64bcd5868d3c] Running
	I1123 09:07:49.338698  401015 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-602386" [458d9abb-1c23-41f9-ad60-30086af0728a] Running
	I1123 09:07:49.338701  401015 system_pods.go:89] "kube-proxy-wnrqx" [0c0df979-c169-4565-8362-dca7550a80f5] Running
	I1123 09:07:49.338704  401015 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-602386" [530a6482-dbd1-47d7-bd47-9dcef0947a43] Running
	I1123 09:07:49.338707  401015 system_pods.go:89] "storage-provisioner" [68aad3cb-9d9e-4bca-9271-f4b65e2a8a9f] Running
	I1123 09:07:49.338715  401015 system_pods.go:126] duration metric: took 1.526943774s to wait for k8s-apps to be running ...
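The system_pods wait above is a list-and-retry loop with a growing backoff: list the kube-system pods, check that none are still Pending, then sleep and try again (200ms, 299ms, 405ms, 597ms in this run). A compact client-go sketch of the same idea, assuming a kubeconfig at the default path; this is not minikube's internal system_pods.go:

package main

import (
	"context"
	"fmt"
	"time"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	backoff := 200 * time.Millisecond
	for {
		pods, err := client.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		running := 0
		for _, p := range pods.Items {
			if p.Status.Phase == v1.PodRunning {
				running++
			}
		}
		if running > 0 && running == len(pods.Items) {
			fmt.Printf("all %d kube-system pods running\n", running)
			return
		}
		time.Sleep(backoff)
		backoff += backoff / 2 // grow the delay between retries, roughly like retry.go
	}
}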
	I1123 09:07:49.338723  401015 system_svc.go:44] waiting for kubelet service to be running ....
	I1123 09:07:49.338767  401015 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 09:07:49.351457  401015 system_svc.go:56] duration metric: took 12.724784ms WaitForService to wait for kubelet
	I1123 09:07:49.351488  401015 kubeadm.go:587] duration metric: took 13.863399853s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1123 09:07:49.351509  401015 node_conditions.go:102] verifying NodePressure condition ...
	I1123 09:07:49.354278  401015 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1123 09:07:49.354302  401015 node_conditions.go:123] node cpu capacity is 8
	I1123 09:07:49.354317  401015 node_conditions.go:105] duration metric: took 2.803369ms to run NodePressure ...
	I1123 09:07:49.354329  401015 start.go:242] waiting for startup goroutines ...
	I1123 09:07:49.354338  401015 start.go:247] waiting for cluster config update ...
	I1123 09:07:49.354348  401015 start.go:256] writing updated cluster config ...
	I1123 09:07:49.354580  401015 ssh_runner.go:195] Run: rm -f paused
	I1123 09:07:49.358289  401015 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1123 09:07:49.361627  401015 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-64rdm" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:07:49.365494  401015 pod_ready.go:94] pod "coredns-66bc5c9577-64rdm" is "Ready"
	I1123 09:07:49.365518  401015 pod_ready.go:86] duration metric: took 3.86978ms for pod "coredns-66bc5c9577-64rdm" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:07:49.367291  401015 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-602386" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:07:49.370769  401015 pod_ready.go:94] pod "etcd-default-k8s-diff-port-602386" is "Ready"
	I1123 09:07:49.370786  401015 pod_ready.go:86] duration metric: took 3.474711ms for pod "etcd-default-k8s-diff-port-602386" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:07:49.372448  401015 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-602386" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:07:49.375693  401015 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-602386" is "Ready"
	I1123 09:07:49.375710  401015 pod_ready.go:86] duration metric: took 3.240891ms for pod "kube-apiserver-default-k8s-diff-port-602386" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:07:49.377493  401015 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-602386" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:07:49.762835  401015 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-602386" is "Ready"
	I1123 09:07:49.762861  401015 pod_ready.go:86] duration metric: took 385.352648ms for pod "kube-controller-manager-default-k8s-diff-port-602386" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:07:49.963351  401015 pod_ready.go:83] waiting for pod "kube-proxy-wnrqx" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:07:50.362793  401015 pod_ready.go:94] pod "kube-proxy-wnrqx" is "Ready"
	I1123 09:07:50.362822  401015 pod_ready.go:86] duration metric: took 399.44633ms for pod "kube-proxy-wnrqx" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:07:50.563494  401015 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-602386" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:07:50.962787  401015 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-602386" is "Ready"
	I1123 09:07:50.962814  401015 pod_ready.go:86] duration metric: took 399.291647ms for pod "kube-scheduler-default-k8s-diff-port-602386" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:07:50.962828  401015 pod_ready.go:40] duration metric: took 1.604507271s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1123 09:07:51.006685  401015 start.go:625] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1123 09:07:51.008507  401015 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-602386" cluster and "default" namespace by default
	W1123 09:07:46.419800  406807 pod_ready.go:104] pod "coredns-5dd5756b68-whp8m" is not "Ready", error: <nil>
	W1123 09:07:48.919893  406807 pod_ready.go:104] pod "coredns-5dd5756b68-whp8m" is not "Ready", error: <nil>
	I1123 09:07:47.650855  409946 out.go:252] * Restarting existing docker container for "no-preload-619589" ...
	I1123 09:07:47.650941  409946 cli_runner.go:164] Run: docker start no-preload-619589
	I1123 09:07:47.959520  409946 cli_runner.go:164] Run: docker container inspect no-preload-619589 --format={{.State.Status}}
	I1123 09:07:47.987746  409946 kic.go:430] container "no-preload-619589" state is running.
	I1123 09:07:47.988264  409946 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-619589
	I1123 09:07:48.011667  409946 profile.go:143] Saving config to /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/no-preload-619589/config.json ...
	I1123 09:07:48.012253  409946 machine.go:94] provisionDockerMachine start ...
	I1123 09:07:48.012338  409946 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-619589
	I1123 09:07:48.040106  409946 main.go:143] libmachine: Using SSH client type: native
	I1123 09:07:48.040539  409946 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33113 <nil> <nil>}
	I1123 09:07:48.040564  409946 main.go:143] libmachine: About to run SSH command:
	hostname
	I1123 09:07:48.041366  409946 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:59760->127.0.0.1:33113: read: connection reset by peer
	I1123 09:07:51.194895  409946 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-619589
	
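The "connection reset by peer" above is expected: `docker start` returns as soon as the container is up, which can be before sshd inside it is listening, so the provisioner simply retries until the handshake succeeds (about three seconds later here). A hedged sketch of that readiness probe against the forwarded port from the log:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	addr := "127.0.0.1:33113" // host port mapped to the container's 22/tcp, per the log
	deadline := time.Now().Add(30 * time.Second)
	for time.Now().Before(deadline) {
		conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
		if err == nil {
			conn.Close()
			fmt.Println("ssh port is accepting connections")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for sshd")
}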
	I1123 09:07:51.194930  409946 ubuntu.go:182] provisioning hostname "no-preload-619589"
	I1123 09:07:51.195017  409946 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-619589
	I1123 09:07:51.213356  409946 main.go:143] libmachine: Using SSH client type: native
	I1123 09:07:51.213588  409946 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33113 <nil> <nil>}
	I1123 09:07:51.213607  409946 main.go:143] libmachine: About to run SSH command:
	sudo hostname no-preload-619589 && echo "no-preload-619589" | sudo tee /etc/hostname
	I1123 09:07:51.366680  409946 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-619589
	
	I1123 09:07:51.366766  409946 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-619589
	I1123 09:07:51.386543  409946 main.go:143] libmachine: Using SSH client type: native
	I1123 09:07:51.386791  409946 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33113 <nil> <nil>}
	I1123 09:07:51.386812  409946 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-619589' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-619589/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-619589' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1123 09:07:51.534081  409946 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1123 09:07:51.534115  409946 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21969-103686/.minikube CaCertPath:/home/jenkins/minikube-integration/21969-103686/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21969-103686/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21969-103686/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21969-103686/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21969-103686/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21969-103686/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21969-103686/.minikube}
	I1123 09:07:51.534142  409946 ubuntu.go:190] setting up certificates
	I1123 09:07:51.534155  409946 provision.go:84] configureAuth start
	I1123 09:07:51.534213  409946 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-619589
	I1123 09:07:51.554379  409946 provision.go:143] copyHostCerts
	I1123 09:07:51.554447  409946 exec_runner.go:144] found /home/jenkins/minikube-integration/21969-103686/.minikube/ca.pem, removing ...
	I1123 09:07:51.554472  409946 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21969-103686/.minikube/ca.pem
	I1123 09:07:51.554575  409946 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21969-103686/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21969-103686/.minikube/ca.pem (1078 bytes)
	I1123 09:07:51.554693  409946 exec_runner.go:144] found /home/jenkins/minikube-integration/21969-103686/.minikube/cert.pem, removing ...
	I1123 09:07:51.554705  409946 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21969-103686/.minikube/cert.pem
	I1123 09:07:51.554750  409946 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21969-103686/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21969-103686/.minikube/cert.pem (1123 bytes)
	I1123 09:07:51.554835  409946 exec_runner.go:144] found /home/jenkins/minikube-integration/21969-103686/.minikube/key.pem, removing ...
	I1123 09:07:51.554846  409946 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21969-103686/.minikube/key.pem
	I1123 09:07:51.554885  409946 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21969-103686/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21969-103686/.minikube/key.pem (1675 bytes)
	I1123 09:07:51.555011  409946 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21969-103686/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21969-103686/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21969-103686/.minikube/certs/ca-key.pem org=jenkins.no-preload-619589 san=[127.0.0.1 192.168.85.2 localhost minikube no-preload-619589]
	I1123 09:07:51.577372  409946 provision.go:177] copyRemoteCerts
	I1123 09:07:51.577442  409946 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1123 09:07:51.577495  409946 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-619589
	I1123 09:07:51.597840  409946 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/21969-103686/.minikube/machines/no-preload-619589/id_rsa Username:docker}
	I1123 09:07:51.703021  409946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-103686/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1123 09:07:51.720117  409946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-103686/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1123 09:07:51.737866  409946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-103686/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1123 09:07:51.755697  409946 provision.go:87] duration metric: took 221.527249ms to configureAuth
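configureAuth refreshes the host-side CA material and then issues a server certificate whose SAN list is exactly the one logged above: 127.0.0.1, 192.168.85.2, localhost, minikube, no-preload-619589. A minimal crypto/x509 sketch of minting such a certificate; it self-signs for brevity, whereas minikube signs with its ca-key.pem:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.no-preload-619589"}}, // org from the log
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs copied from the provision.go line above.
		DNSNames:    []string{"localhost", "minikube", "no-preload-619589"},
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.85.2")},
	}
	// Self-signed (template doubles as parent); minikube uses its CA instead.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}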
	I1123 09:07:51.755731  409946 ubuntu.go:206] setting minikube options for container-runtime
	I1123 09:07:51.755885  409946 config.go:182] Loaded profile config "no-preload-619589": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 09:07:51.756068  409946 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-619589
	I1123 09:07:51.774181  409946 main.go:143] libmachine: Using SSH client type: native
	I1123 09:07:51.774463  409946 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33113 <nil> <nil>}
	I1123 09:07:51.774501  409946 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1123 09:07:52.141850  409946 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1123 09:07:52.141885  409946 machine.go:97] duration metric: took 4.12961381s to provisionDockerMachine
	I1123 09:07:52.141914  409946 start.go:293] postStartSetup for "no-preload-619589" (driver="docker")
	I1123 09:07:52.141929  409946 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1123 09:07:52.142018  409946 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1123 09:07:52.142081  409946 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-619589
	I1123 09:07:52.164262  409946 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/21969-103686/.minikube/machines/no-preload-619589/id_rsa Username:docker}
	I1123 09:07:52.273635  409946 ssh_runner.go:195] Run: cat /etc/os-release
	I1123 09:07:52.277592  409946 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1123 09:07:52.277628  409946 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1123 09:07:52.277656  409946 filesync.go:126] Scanning /home/jenkins/minikube-integration/21969-103686/.minikube/addons for local assets ...
	I1123 09:07:52.277725  409946 filesync.go:126] Scanning /home/jenkins/minikube-integration/21969-103686/.minikube/files for local assets ...
	I1123 09:07:52.277826  409946 filesync.go:149] local asset: /home/jenkins/minikube-integration/21969-103686/.minikube/files/etc/ssl/certs/1072342.pem -> 1072342.pem in /etc/ssl/certs
	I1123 09:07:52.277939  409946 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1123 09:07:52.286435  409946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-103686/.minikube/files/etc/ssl/certs/1072342.pem --> /etc/ssl/certs/1072342.pem (1708 bytes)
	I1123 09:07:52.304356  409946 start.go:296] duration metric: took 162.426869ms for postStartSetup
	I1123 09:07:52.304427  409946 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1123 09:07:52.304474  409946 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-619589
	I1123 09:07:52.324087  409946 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/21969-103686/.minikube/machines/no-preload-619589/id_rsa Username:docker}
	I1123 09:07:52.427389  409946 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1123 09:07:52.432432  409946 fix.go:56] duration metric: took 4.806277023s for fixHost
	I1123 09:07:52.432458  409946 start.go:83] releasing machines lock for "no-preload-619589", held for 4.806325896s
	I1123 09:07:52.432515  409946 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-619589
	I1123 09:07:52.451456  409946 ssh_runner.go:195] Run: cat /version.json
	I1123 09:07:52.451501  409946 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-619589
	I1123 09:07:52.451527  409946 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1123 09:07:52.451608  409946 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-619589
	I1123 09:07:52.471369  409946 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/21969-103686/.minikube/machines/no-preload-619589/id_rsa Username:docker}
	I1123 09:07:52.471621  409946 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/21969-103686/.minikube/machines/no-preload-619589/id_rsa Username:docker}
	I1123 09:07:52.570555  409946 ssh_runner.go:195] Run: systemctl --version
	I1123 09:07:52.633499  409946 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1123 09:07:52.671785  409946 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1123 09:07:52.677087  409946 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1123 09:07:52.677153  409946 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1123 09:07:52.685499  409946 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1123 09:07:52.685521  409946 start.go:496] detecting cgroup driver to use...
	I1123 09:07:52.685557  409946 detect.go:190] detected "systemd" cgroup driver on host os
	I1123 09:07:52.685602  409946 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1123 09:07:52.700944  409946 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1123 09:07:52.714715  409946 docker.go:218] disabling cri-docker service (if available) ...
	I1123 09:07:52.714758  409946 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1123 09:07:52.731230  409946 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1123 09:07:52.745440  409946 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1123 09:07:52.840786  409946 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1123 09:07:52.930535  409946 docker.go:234] disabling docker service ...
	I1123 09:07:52.930600  409946 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1123 09:07:52.945949  409946 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1123 09:07:52.959033  409946 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1123 09:07:53.047624  409946 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1123 09:07:53.133802  409946 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1123 09:07:53.147532  409946 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1123 09:07:53.162045  409946 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1123 09:07:53.162112  409946 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 09:07:53.171552  409946 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1123 09:07:53.171624  409946 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 09:07:53.182365  409946 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 09:07:53.191516  409946 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 09:07:53.202091  409946 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1123 09:07:53.212523  409946 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 09:07:53.223571  409946 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 09:07:53.237244  409946 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 09:07:53.252281  409946 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1123 09:07:53.262511  409946 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1123 09:07:53.272567  409946 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 09:07:53.388527  409946 ssh_runner.go:195] Run: sudo systemctl restart crio
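The run of sed commands above edits /etc/crio/crio.conf.d/02-crio.conf in place: pin the pause image, switch the cgroup manager to systemd (matching the driver detected on the host), put conmon in the "pod" cgroup, and allow unprivileged binds to low ports via a default sysctl, then restart CRI-O so the drop-in takes effect. A local Go sketch of the two central substitutions (illustrative only; minikube drives sed over SSH):

package main

import (
	"os"
	"regexp"
)

func main() {
	path := "/etc/crio/crio.conf.d/02-crio.conf" // drop-in path from the log
	data, err := os.ReadFile(path)
	if err != nil {
		panic(err)
	}
	// Equivalent of: sed -i 's|^.*pause_image = .*$|pause_image = "..."|'
	data = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.10.1"`))
	// Equivalent of: sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|'
	data = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAll(data, []byte(`cgroup_manager = "systemd"`))
	if err := os.WriteFile(path, data, 0o644); err != nil {
		panic(err)
	}
}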
	I1123 09:07:53.571322  409946 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1123 09:07:53.571389  409946 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1123 09:07:53.575672  409946 start.go:564] Will wait 60s for crictl version
	I1123 09:07:53.575727  409946 ssh_runner.go:195] Run: which crictl
	I1123 09:07:53.579636  409946 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1123 09:07:53.605611  409946 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1123 09:07:53.605723  409946 ssh_runner.go:195] Run: crio --version
	I1123 09:07:53.638991  409946 ssh_runner.go:195] Run: crio --version
	I1123 09:07:53.670135  409946 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1123 09:07:53.671233  409946 cli_runner.go:164] Run: docker network inspect no-preload-619589 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1123 09:07:53.699519  409946 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1123 09:07:53.705314  409946 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
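The one-liner above keeps /etc/hosts idempotent: strip any existing host.minikube.internal line, append the current gateway mapping, and copy the temp file back with sudo. The same pattern in plain Go (values from the log; writing the real file would need root):

package main

import (
	"os"
	"strings"
)

func main() {
	const entry = "192.168.85.1\thost.minikube.internal" // gateway IP from the log
	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		panic(err)
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		// Drop any stale mapping, mirroring the grep -v in the log.
		if !strings.HasSuffix(line, "\thost.minikube.internal") {
			kept = append(kept, line)
		}
	}
	kept = append(kept, entry)
	if err := os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
		panic(err)
	}
}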
	I1123 09:07:53.720915  409946 kubeadm.go:884] updating cluster {Name:no-preload-619589 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-619589 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1123 09:07:53.721089  409946 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1123 09:07:53.721135  409946 ssh_runner.go:195] Run: sudo crictl images --output json
	I1123 09:07:53.767391  409946 crio.go:514] all images are preloaded for cri-o runtime.
	I1123 09:07:53.767420  409946 cache_images.go:86] Images are preloaded, skipping loading
	I1123 09:07:53.767431  409946 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1123 09:07:53.767575  409946 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-619589 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:no-preload-619589 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1123 09:07:53.767717  409946 ssh_runner.go:195] Run: crio config
	I1123 09:07:53.828919  409946 cni.go:84] Creating CNI manager for ""
	I1123 09:07:53.828940  409946 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1123 09:07:53.828957  409946 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1123 09:07:53.829014  409946 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-619589 NodeName:no-preload-619589 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1123 09:07:53.829154  409946 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-619589"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
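The generated kubeadm.yaml above stacks four YAML documents in one file (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration); kubeadm picks out the kinds it understands and hands the KubeletConfiguration to the kubelet. A small sketch that splits and identifies the documents, assuming gopkg.in/yaml.v3 is available:

package main

import (
	"fmt"
	"io"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	f, err := os.Open("/var/tmp/minikube/kubeadm.yaml") // path from the log
	if err != nil {
		panic(err)
	}
	defer f.Close()
	dec := yaml.NewDecoder(f) // decodes one "---"-separated document per call
	for {
		var doc map[string]interface{}
		if err := dec.Decode(&doc); err == io.EOF {
			break
		} else if err != nil {
			panic(err)
		}
		fmt.Printf("%v / %v\n", doc["apiVersion"], doc["kind"])
	}
}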
	I1123 09:07:53.829215  409946 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1123 09:07:53.838374  409946 binaries.go:51] Found k8s binaries, skipping transfer
	I1123 09:07:53.838444  409946 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1123 09:07:53.846935  409946 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1123 09:07:53.861751  409946 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1123 09:07:53.875768  409946 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2213 bytes)
	I1123 09:07:53.892446  409946 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1123 09:07:53.896808  409946 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1123 09:07:53.907371  409946 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 09:07:54.009226  409946 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 09:07:54.025801  409946 certs.go:69] Setting up /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/no-preload-619589 for IP: 192.168.85.2
	I1123 09:07:54.025824  409946 certs.go:195] generating shared ca certs ...
	I1123 09:07:54.025849  409946 certs.go:227] acquiring lock for ca certs: {Name:mkeed0bc088459219396fb35c578dfb0927f31a3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 09:07:54.026027  409946 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21969-103686/.minikube/ca.key
	I1123 09:07:54.026078  409946 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21969-103686/.minikube/proxy-client-ca.key
	I1123 09:07:54.026092  409946 certs.go:257] generating profile certs ...
	I1123 09:07:54.026192  409946 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/no-preload-619589/client.key
	I1123 09:07:54.026236  409946 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/no-preload-619589/apiserver.key.9070ee56
	I1123 09:07:54.026270  409946 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/no-preload-619589/proxy-client.key
	I1123 09:07:54.026373  409946 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-103686/.minikube/certs/107234.pem (1338 bytes)
	W1123 09:07:54.026403  409946 certs.go:480] ignoring /home/jenkins/minikube-integration/21969-103686/.minikube/certs/107234_empty.pem, impossibly tiny 0 bytes
	I1123 09:07:54.026412  409946 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-103686/.minikube/certs/ca-key.pem (1675 bytes)
	I1123 09:07:54.026437  409946 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-103686/.minikube/certs/ca.pem (1078 bytes)
	I1123 09:07:54.026461  409946 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-103686/.minikube/certs/cert.pem (1123 bytes)
	I1123 09:07:54.026484  409946 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-103686/.minikube/certs/key.pem (1675 bytes)
	I1123 09:07:54.026523  409946 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-103686/.minikube/files/etc/ssl/certs/1072342.pem (1708 bytes)
	I1123 09:07:54.027151  409946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-103686/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1123 09:07:54.047887  409946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-103686/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1123 09:07:54.067545  409946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-103686/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1123 09:07:54.087105  409946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-103686/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1671 bytes)
	I1123 09:07:54.110615  409946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/no-preload-619589/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1123 09:07:54.134528  409946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/no-preload-619589/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1123 09:07:54.153065  409946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/no-preload-619589/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1123 09:07:54.181285  409946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/no-preload-619589/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1123 09:07:54.202515  409946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-103686/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1123 09:07:54.223376  409946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-103686/.minikube/certs/107234.pem --> /usr/share/ca-certificates/107234.pem (1338 bytes)
	I1123 09:07:54.243522  409946 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-103686/.minikube/files/etc/ssl/certs/1072342.pem --> /usr/share/ca-certificates/1072342.pem (1708 bytes)
	I1123 09:07:54.262673  409946 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1123 09:07:54.276208  409946 ssh_runner.go:195] Run: openssl version
	I1123 09:07:54.282443  409946 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1123 09:07:54.291301  409946 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1123 09:07:54.295299  409946 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 23 08:20 /usr/share/ca-certificates/minikubeCA.pem
	I1123 09:07:54.295352  409946 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1123 09:07:54.331341  409946 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1123 09:07:54.339519  409946 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/107234.pem && ln -fs /usr/share/ca-certificates/107234.pem /etc/ssl/certs/107234.pem"
	I1123 09:07:54.348538  409946 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/107234.pem
	I1123 09:07:54.352062  409946 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 23 08:25 /usr/share/ca-certificates/107234.pem
	I1123 09:07:54.352107  409946 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/107234.pem
	I1123 09:07:54.386925  409946 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/107234.pem /etc/ssl/certs/51391683.0"
	I1123 09:07:54.395017  409946 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1072342.pem && ln -fs /usr/share/ca-certificates/1072342.pem /etc/ssl/certs/1072342.pem"
	I1123 09:07:54.403509  409946 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1072342.pem
	I1123 09:07:54.407122  409946 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 23 08:25 /usr/share/ca-certificates/1072342.pem
	I1123 09:07:54.407181  409946 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1072342.pem
	I1123 09:07:54.443910  409946 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1072342.pem /etc/ssl/certs/3ec20f2e.0"
	I1123 09:07:54.452548  409946 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1123 09:07:54.456541  409946 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1123 09:07:54.491690  409946 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1123 09:07:54.526305  409946 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1123 09:07:54.576762  409946 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1123 09:07:54.620227  409946 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1123 09:07:54.674844  409946 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
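Each of the six openssl probes above asks the same question: `-checkend 86400` exits non-zero if the certificate expires within the next 86400 seconds (one day), which would trigger regeneration. A Go equivalent for one of the files, using crypto/x509 directly:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	data, err := os.ReadFile("/var/lib/minikube/certs/front-proxy-client.crt") // path from the log
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	if time.Until(cert.NotAfter) < 24*time.Hour {
		fmt.Println("certificate expires within 86400s; regenerate it")
	} else {
		fmt.Println("certificate valid for at least another day")
	}
}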
	I1123 09:07:54.729315  409946 kubeadm.go:401] StartCluster: {Name:no-preload-619589 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-619589 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 09:07:54.729421  409946 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1123 09:07:54.729503  409946 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1123 09:07:54.765376  409946 cri.go:89] found id: "a3bc253f74d935c63450cd3db07c274df85d3f1746da99b79e94bf15141d4c16"
	I1123 09:07:54.765402  409946 cri.go:89] found id: "6ac3ed6ad22f96a5e8a6803a48c463751843af2805ec1400ba36fedc144cf1d9"
	I1123 09:07:54.765408  409946 cri.go:89] found id: "9b89533199bb2186454a2491d3cdd6e0a13a98d889f1739695a869ff190a6ad7"
	I1123 09:07:54.765413  409946 cri.go:89] found id: "1f60fb31039bdce86058df87c7da04ea74adbafc6e245568fb6ab0413a0af065"
	I1123 09:07:54.765417  409946 cri.go:89] found id: ""
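StartCluster begins by enumerating which kube-system containers survived the restart; the IDs above come from crictl, filtered on the pod-namespace label. A hedged sketch of the same listing via os/exec (run on the node as root, or through SSH as the log does):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Same invocation as the ssh_runner line above, minus the SSH hop.
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
	if err != nil {
		fmt.Println("crictl failed:", err)
		return
	}
	for _, id := range strings.Fields(string(out)) {
		fmt.Println("found id:", id)
	}
}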
	I1123 09:07:54.765468  409946 ssh_runner.go:195] Run: sudo runc list -f json
	W1123 09:07:54.778620  409946 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T09:07:54Z" level=error msg="open /run/runc: no such file or directory"
	I1123 09:07:54.778691  409946 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1123 09:07:54.788930  409946 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1123 09:07:54.788949  409946 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1123 09:07:54.789025  409946 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1123 09:07:54.797160  409946 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1123 09:07:54.798080  409946 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-619589" does not appear in /home/jenkins/minikube-integration/21969-103686/kubeconfig
	I1123 09:07:54.798736  409946 kubeconfig.go:62] /home/jenkins/minikube-integration/21969-103686/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-619589" cluster setting kubeconfig missing "no-preload-619589" context setting]
	I1123 09:07:54.799811  409946 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21969-103686/kubeconfig: {Name:mk8cdc20da988b29689f768b3cb01ff7e637077d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 09:07:54.802432  409946 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1123 09:07:54.811265  409946 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.85.2
	I1123 09:07:54.811305  409946 kubeadm.go:602] duration metric: took 22.349509ms to restartPrimaryControlPlane
	I1123 09:07:54.811319  409946 kubeadm.go:403] duration metric: took 82.019175ms to StartCluster
	I1123 09:07:54.811340  409946 settings.go:142] acquiring lock: {Name:mk7e59eae8b3289f60fef384e6a5716369959bb2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 09:07:54.811423  409946 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21969-103686/kubeconfig
	I1123 09:07:54.813997  409946 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21969-103686/kubeconfig: {Name:mk8cdc20da988b29689f768b3cb01ff7e637077d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 09:07:54.814295  409946 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1123 09:07:54.814370  409946 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1123 09:07:54.814479  409946 addons.go:70] Setting storage-provisioner=true in profile "no-preload-619589"
	I1123 09:07:54.814504  409946 addons.go:239] Setting addon storage-provisioner=true in "no-preload-619589"
	W1123 09:07:54.814517  409946 addons.go:248] addon storage-provisioner should already be in state true
	I1123 09:07:54.814522  409946 config.go:182] Loaded profile config "no-preload-619589": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 09:07:54.814559  409946 host.go:66] Checking if "no-preload-619589" exists ...
	I1123 09:07:54.814565  409946 addons.go:70] Setting dashboard=true in profile "no-preload-619589"
	I1123 09:07:54.814580  409946 addons.go:239] Setting addon dashboard=true in "no-preload-619589"
	W1123 09:07:54.814592  409946 addons.go:248] addon dashboard should already be in state true
	I1123 09:07:54.814623  409946 host.go:66] Checking if "no-preload-619589" exists ...
	I1123 09:07:54.815141  409946 cli_runner.go:164] Run: docker container inspect no-preload-619589 --format={{.State.Status}}
	I1123 09:07:54.815207  409946 cli_runner.go:164] Run: docker container inspect no-preload-619589 --format={{.State.Status}}
	I1123 09:07:54.815215  409946 addons.go:70] Setting default-storageclass=true in profile "no-preload-619589"
	I1123 09:07:54.815239  409946 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-619589"
	I1123 09:07:54.815585  409946 cli_runner.go:164] Run: docker container inspect no-preload-619589 --format={{.State.Status}}
	I1123 09:07:54.819542  409946 out.go:179] * Verifying Kubernetes components...
	I1123 09:07:54.821223  409946 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 09:07:54.841122  409946 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1123 09:07:54.842198  409946 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1123 09:07:54.843307  409946 addons.go:239] Setting addon default-storageclass=true in "no-preload-619589"
	W1123 09:07:54.843331  409946 addons.go:248] addon default-storageclass should already be in state true
	I1123 09:07:54.843346  409946 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	W1123 09:07:51.420023  406807 pod_ready.go:104] pod "coredns-5dd5756b68-whp8m" is not "Ready", error: <nil>
	W1123 09:07:53.426349  406807 pod_ready.go:104] pod "coredns-5dd5756b68-whp8m" is not "Ready", error: <nil>
	W1123 09:07:55.924288  406807 pod_ready.go:104] pod "coredns-5dd5756b68-whp8m" is not "Ready", error: <nil>
	I1123 09:07:54.843361  409946 host.go:66] Checking if "no-preload-619589" exists ...
	I1123 09:07:54.843382  409946 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1123 09:07:54.843398  409946 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1123 09:07:54.843503  409946 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-619589
	I1123 09:07:54.843909  409946 cli_runner.go:164] Run: docker container inspect no-preload-619589 --format={{.State.Status}}
	I1123 09:07:54.844384  409946 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1123 09:07:54.844401  409946 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1123 09:07:54.844451  409946 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-619589
	I1123 09:07:54.873501  409946 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/21969-103686/.minikube/machines/no-preload-619589/id_rsa Username:docker}
	I1123 09:07:54.875267  409946 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1123 09:07:54.875287  409946 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1123 09:07:54.875343  409946 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-619589
	I1123 09:07:54.876119  409946 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/21969-103686/.minikube/machines/no-preload-619589/id_rsa Username:docker}
	I1123 09:07:54.916188  409946 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/21969-103686/.minikube/machines/no-preload-619589/id_rsa Username:docker}
	I1123 09:07:55.002669  409946 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 09:07:55.007554  409946 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1123 09:07:55.020760  409946 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1123 09:07:55.020784  409946 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1123 09:07:55.022763  409946 node_ready.go:35] waiting up to 6m0s for node "no-preload-619589" to be "Ready" ...
	I1123 09:07:55.036128  409946 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1123 09:07:55.045857  409946 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1123 09:07:55.045879  409946 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1123 09:07:55.064307  409946 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1123 09:07:55.064333  409946 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1123 09:07:55.084388  409946 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1123 09:07:55.084415  409946 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1123 09:07:55.114776  409946 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1123 09:07:55.114810  409946 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1123 09:07:55.133824  409946 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1123 09:07:55.133955  409946 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1123 09:07:55.153887  409946 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1123 09:07:55.153912  409946 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1123 09:07:55.176889  409946 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1123 09:07:55.176925  409946 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1123 09:07:55.200230  409946 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1123 09:07:55.200254  409946 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1123 09:07:55.222237  409946 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1123 09:07:56.710503  409946 node_ready.go:49] node "no-preload-619589" is "Ready"
	I1123 09:07:56.710539  409946 node_ready.go:38] duration metric: took 1.68774814s for node "no-preload-619589" to be "Ready" ...
	I1123 09:07:56.710557  409946 api_server.go:52] waiting for apiserver process to appear ...
	I1123 09:07:56.710615  409946 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1123 09:07:57.418745  409946 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.411138206s)
	I1123 09:07:57.418803  409946 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.382638858s)
	I1123 09:07:57.418922  409946 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.196589406s)
	I1123 09:07:57.419153  409946 api_server.go:72] duration metric: took 2.604824097s to wait for apiserver process to appear ...
	I1123 09:07:57.419166  409946 api_server.go:88] waiting for apiserver healthz status ...
	I1123 09:07:57.419185  409946 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1123 09:07:57.420910  409946 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p no-preload-619589 addons enable metrics-server
	
	I1123 09:07:57.424571  409946 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1123 09:07:57.424602  409946 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500 (response body identical to the healthz output above)
	I1123 09:07:57.431052  409946 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1123 09:07:57.432259  409946 addons.go:530] duration metric: took 2.617888963s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
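	
	Note: the healthz 500 above is the expected transient state just after an apiserver (re)start; the two failing poststarthooks (rbac/bootstrap-roles and scheduling/bootstrap-system-priority-classes) clear once bootstrapping completes. Assuming the profile's kubeconfig context is selected, readiness can be re-checked by hand with:
	
		kubectl get --raw '/healthz?verbose'
		minikube -p no-preload-619589 kubectl -- get --raw '/readyz'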
	
	
	==> CRI-O <==
	Nov 23 09:07:48 default-k8s-diff-port-602386 crio[771]: time="2025-11-23T09:07:48.022744608Z" level=info msg="Starting container: 09d7bce5ea96ac2b0837f0d91327d051837c21b78835461cc97d15277cd6bb4a" id=9b3623d7-ab8f-4d0a-a501-7bebe639aa36 name=/runtime.v1.RuntimeService/StartContainer
	Nov 23 09:07:48 default-k8s-diff-port-602386 crio[771]: time="2025-11-23T09:07:48.026997351Z" level=info msg="Started container" PID=1827 containerID=09d7bce5ea96ac2b0837f0d91327d051837c21b78835461cc97d15277cd6bb4a description=kube-system/coredns-66bc5c9577-64rdm/coredns id=9b3623d7-ab8f-4d0a-a501-7bebe639aa36 name=/runtime.v1.RuntimeService/StartContainer sandboxID=5a38987fb018b9659b9f55a3c4d716839e10170ea3b0c5f14b3409c8865bb463
	Nov 23 09:07:51 default-k8s-diff-port-602386 crio[771]: time="2025-11-23T09:07:51.484656701Z" level=info msg="Running pod sandbox: default/busybox/POD" id=3c5d3200-6276-417e-a43c-6dcbb971ab3b name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 23 09:07:51 default-k8s-diff-port-602386 crio[771]: time="2025-11-23T09:07:51.484730965Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 09:07:51 default-k8s-diff-port-602386 crio[771]: time="2025-11-23T09:07:51.490046437Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:3d327b224079293ff46793598356bf3d359072a484b4f3b07aa1687d7f681908 UID:4a775da7-4f9d-4680-9fb4-7d598e9e8512 NetNS:/var/run/netns/767d2dfd-abfe-49e7-a9d0-06a7dddfc261 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000914970}] Aliases:map[]}"
	Nov 23 09:07:51 default-k8s-diff-port-602386 crio[771]: time="2025-11-23T09:07:51.490081448Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Nov 23 09:07:51 default-k8s-diff-port-602386 crio[771]: time="2025-11-23T09:07:51.499956198Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:3d327b224079293ff46793598356bf3d359072a484b4f3b07aa1687d7f681908 UID:4a775da7-4f9d-4680-9fb4-7d598e9e8512 NetNS:/var/run/netns/767d2dfd-abfe-49e7-a9d0-06a7dddfc261 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000914970}] Aliases:map[]}"
	Nov 23 09:07:51 default-k8s-diff-port-602386 crio[771]: time="2025-11-23T09:07:51.500131422Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Nov 23 09:07:51 default-k8s-diff-port-602386 crio[771]: time="2025-11-23T09:07:51.500876477Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Nov 23 09:07:51 default-k8s-diff-port-602386 crio[771]: time="2025-11-23T09:07:51.504260917Z" level=info msg="Ran pod sandbox 3d327b224079293ff46793598356bf3d359072a484b4f3b07aa1687d7f681908 with infra container: default/busybox/POD" id=3c5d3200-6276-417e-a43c-6dcbb971ab3b name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 23 09:07:51 default-k8s-diff-port-602386 crio[771]: time="2025-11-23T09:07:51.50550921Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=e84f1b86-76a7-4942-95c5-5bc73f01edc9 name=/runtime.v1.ImageService/ImageStatus
	Nov 23 09:07:51 default-k8s-diff-port-602386 crio[771]: time="2025-11-23T09:07:51.505633815Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=e84f1b86-76a7-4942-95c5-5bc73f01edc9 name=/runtime.v1.ImageService/ImageStatus
	Nov 23 09:07:51 default-k8s-diff-port-602386 crio[771]: time="2025-11-23T09:07:51.505684798Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=e84f1b86-76a7-4942-95c5-5bc73f01edc9 name=/runtime.v1.ImageService/ImageStatus
	Nov 23 09:07:51 default-k8s-diff-port-602386 crio[771]: time="2025-11-23T09:07:51.506607172Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=65711c68-d70e-4212-b9f1-28174baa8e78 name=/runtime.v1.ImageService/PullImage
	Nov 23 09:07:51 default-k8s-diff-port-602386 crio[771]: time="2025-11-23T09:07:51.508834436Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 23 09:07:53 default-k8s-diff-port-602386 crio[771]: time="2025-11-23T09:07:53.465104496Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=65711c68-d70e-4212-b9f1-28174baa8e78 name=/runtime.v1.ImageService/PullImage
	Nov 23 09:07:53 default-k8s-diff-port-602386 crio[771]: time="2025-11-23T09:07:53.466050893Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=d564080d-a6da-452d-a076-7c753711a053 name=/runtime.v1.ImageService/ImageStatus
	Nov 23 09:07:53 default-k8s-diff-port-602386 crio[771]: time="2025-11-23T09:07:53.468953883Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=7aa94532-f632-445b-88ab-6fc50e4a7fb0 name=/runtime.v1.ImageService/ImageStatus
	Nov 23 09:07:53 default-k8s-diff-port-602386 crio[771]: time="2025-11-23T09:07:53.473272711Z" level=info msg="Creating container: default/busybox/busybox" id=30d4c3de-7a97-4098-b3da-ee9a65f34ffe name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 09:07:53 default-k8s-diff-port-602386 crio[771]: time="2025-11-23T09:07:53.473780867Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 09:07:53 default-k8s-diff-port-602386 crio[771]: time="2025-11-23T09:07:53.480398019Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 09:07:53 default-k8s-diff-port-602386 crio[771]: time="2025-11-23T09:07:53.481650052Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 09:07:53 default-k8s-diff-port-602386 crio[771]: time="2025-11-23T09:07:53.514573818Z" level=info msg="Created container 32a17b7de118a776908d9559f11550741d2a599acf3df4b9f2211fabf6b358bd: default/busybox/busybox" id=30d4c3de-7a97-4098-b3da-ee9a65f34ffe name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 09:07:53 default-k8s-diff-port-602386 crio[771]: time="2025-11-23T09:07:53.515823184Z" level=info msg="Starting container: 32a17b7de118a776908d9559f11550741d2a599acf3df4b9f2211fabf6b358bd" id=71d1af29-05b8-4176-98b6-4a42ef0d6167 name=/runtime.v1.RuntimeService/StartContainer
	Nov 23 09:07:53 default-k8s-diff-port-602386 crio[771]: time="2025-11-23T09:07:53.518137465Z" level=info msg="Started container" PID=1905 containerID=32a17b7de118a776908d9559f11550741d2a599acf3df4b9f2211fabf6b358bd description=default/busybox/busybox id=71d1af29-05b8-4176-98b6-4a42ef0d6167 name=/runtime.v1.RuntimeService/StartContainer sandboxID=3d327b224079293ff46793598356bf3d359072a484b4f3b07aa1687d7f681908
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                                    NAMESPACE
	32a17b7de118a       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998   8 seconds ago       Running             busybox                   0                   3d327b2240792       busybox                                                default
	09d7bce5ea96a       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                      13 seconds ago      Running             coredns                   0                   5a38987fb018b       coredns-66bc5c9577-64rdm                               kube-system
	18079a3e046d0       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      13 seconds ago      Running             storage-provisioner       0                   637e46a94c9ce       storage-provisioner                                    kube-system
	eb91686a87eda       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                      25 seconds ago      Running             kindnet-cni               0                   859a0ba879279       kindnet-kqj66                                          kube-system
	62e624f0a3e0a       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                      25 seconds ago      Running             kube-proxy                0                   f4bf7795285fb       kube-proxy-wnrqx                                       kube-system
	3488887733e4b       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                      35 seconds ago      Running             kube-apiserver            0                   b47461968b5b1       kube-apiserver-default-k8s-diff-port-602386            kube-system
	e0dbf7100207a       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                      35 seconds ago      Running             kube-scheduler            0                   5a4f6e00d29bc       kube-scheduler-default-k8s-diff-port-602386            kube-system
	d31dc82b027d5       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                      35 seconds ago      Running             etcd                      0                   135b949744e80       etcd-default-k8s-diff-port-602386                      kube-system
	b0aad8c47f0aa       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                      35 seconds ago      Running             kube-controller-manager   0                   b478f541c3803       kube-controller-manager-default-k8s-diff-port-602386   kube-system
	
	
	==> coredns [09d7bce5ea96ac2b0837f0d91327d051837c21b78835461cc97d15277cd6bb4a] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = c7556d8fdf49c5e32a9077be8cfb9fc6947bb07e663a10d55b192eb63ad1f2bd9793e8e5f5a36fc9abb1957831eec5c997fd9821790e3990ae9531bf41ecea37
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:39873 - 62 "HINFO IN 209538208433059805.2305070407080004445. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.069819446s
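	
	The lone NXDOMAIN above is CoreDNS's startup HINFO self-query (its loop-detection probe), not a failed client lookup. A minimal in-cluster resolution check, assuming a throwaway pod is acceptable:
	
		kubectl run dns-check --rm -it --restart=Never --image=busybox:1.28 -- nslookup kubernetes.default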
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-602386
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-602386
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=50c3a8a3c03e8a84b6c978a884d21c3de8c6d4f1
	                    minikube.k8s.io/name=default-k8s-diff-port-602386
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_23T09_07_31_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 23 Nov 2025 09:07:28 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-602386
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 23 Nov 2025 09:08:01 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 23 Nov 2025 09:07:47 +0000   Sun, 23 Nov 2025 09:07:26 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 23 Nov 2025 09:07:47 +0000   Sun, 23 Nov 2025 09:07:26 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 23 Nov 2025 09:07:47 +0000   Sun, 23 Nov 2025 09:07:26 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 23 Nov 2025 09:07:47 +0000   Sun, 23 Nov 2025 09:07:47 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    default-k8s-diff-port-602386
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 9629f1d5bc1ed524a56ce23c69214c09
	  System UUID:                080d5fdd-e379-43ff-bc41-4910fe3f507a
	  Boot ID:                    49386d37-de97-442f-8364-9b77578bcf47
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  kube-system                 coredns-66bc5c9577-64rdm                                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     25s
	  kube-system                 etcd-default-k8s-diff-port-602386                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         31s
	  kube-system                 kindnet-kqj66                                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      25s
	  kube-system                 kube-apiserver-default-k8s-diff-port-602386             250m (3%)     0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-602386    200m (2%)     0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 kube-proxy-wnrqx                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         25s
	  kube-system                 kube-scheduler-default-k8s-diff-port-602386             100m (1%)     0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         26s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 24s                kube-proxy       
	  Normal  Starting                 36s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  36s (x8 over 36s)  kubelet          Node default-k8s-diff-port-602386 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    36s (x8 over 36s)  kubelet          Node default-k8s-diff-port-602386 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     36s (x8 over 36s)  kubelet          Node default-k8s-diff-port-602386 status is now: NodeHasSufficientPID
	  Normal  Starting                 31s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  31s                kubelet          Node default-k8s-diff-port-602386 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    31s                kubelet          Node default-k8s-diff-port-602386 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     31s                kubelet          Node default-k8s-diff-port-602386 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           26s                node-controller  Node default-k8s-diff-port-602386 event: Registered Node default-k8s-diff-port-602386 in Controller
	  Normal  NodeReady                14s                kubelet          Node default-k8s-diff-port-602386 status is now: NodeReady
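	
	The node dump above is kubectl describe output captured by minikube logs; to reproduce it (or watch the conditions change) against the live profile:
	
		kubectl describe node default-k8s-diff-port-602386
		kubectl get node default-k8s-diff-port-602386 -o wide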
	
	
	==> dmesg <==
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff e2 f6 b3 df 65 66 08 06
	[ +15.220231] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff ce d6 cd 1c d5 af 08 06
	[  +0.016823] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff 42 21 9e 70 54 d2 08 06
	[  +0.853950] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 8a f3 da 67 50 34 08 06
	[  +0.000402] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff e2 f6 b3 df 65 66 08 06
	[Nov23 09:06] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 3a fe f0 bb b2 e5 08 06
	[  +0.000433] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000017] ll header: 00000000: ff ff ff ff ff ff 42 21 9e 70 54 d2 08 06
	[ +22.099976] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff fa bc 42 68 ca 38 08 06
	[  +0.042361] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 02 6f 93 2c ed 12 08 06
	[ +12.988668] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff c6 40 c7 0d 08 88 08 06
	[  +0.000458] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff c6 f2 c5 3b d5 0a 08 06
	[  +8.074904] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff ba d8 15 23 cb ea 08 06
	[  +0.000480] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff fa bc 42 68 ca 38 08 06
	
	
	==> etcd [d31dc82b027d5e61c3ce1de5bc51d0446fd3ebed3aa90fdea9a674e71d1bd75a] <==
	{"level":"warn","ts":"2025-11-23T09:07:27.547294Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43466","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:07:27.558018Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43486","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:07:27.574245Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43510","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:07:27.584291Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43532","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:07:27.594768Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43558","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:07:27.604713Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43596","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:07:27.612076Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43614","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:07:27.622100Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43628","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:07:27.629361Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43640","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:07:27.637471Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43656","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:07:27.645119Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43674","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:07:27.653696Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43682","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:07:27.660765Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43712","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:07:27.668029Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43732","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:07:27.675630Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43748","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:07:27.682182Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43774","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:07:27.689058Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43778","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:07:27.696182Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43812","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:07:27.703697Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43840","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:07:27.710490Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43854","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:07:27.723231Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43880","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:07:27.740852Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43908","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:07:27.748674Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43922","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:07:27.757322Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43930","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:07:27.812933Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43954","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 09:08:01 up  1:50,  0 user,  load average: 6.08, 4.35, 2.74
	Linux default-k8s-diff-port-602386 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [eb91686a87eda43203e59309062774ca9088be180eef439f2d7084204bceabd7] <==
	I1123 09:07:36.822534       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1123 09:07:36.822774       1 main.go:139] hostIP = 192.168.94.2
	podIP = 192.168.94.2
	I1123 09:07:36.822921       1 main.go:148] setting mtu 1500 for CNI 
	I1123 09:07:36.822935       1 main.go:178] kindnetd IP family: "ipv4"
	I1123 09:07:36.822954       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-23T09:07:37Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1123 09:07:37.024364       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1123 09:07:37.024426       1 controller.go:381] "Waiting for informer caches to sync"
	I1123 09:07:37.024440       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1123 09:07:37.024603       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1123 09:07:37.118532       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1123 09:07:37.118574       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1123 09:07:37.118687       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1123 09:07:37.119241       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1123 09:07:38.324934       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1123 09:07:38.324995       1 metrics.go:72] Registering metrics
	I1123 09:07:38.325078       1 controller.go:711] "Syncing nftables rules"
	I1123 09:07:47.025159       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1123 09:07:47.025214       1 main.go:301] handling current node
	I1123 09:07:57.025313       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1123 09:07:57.025352       1 main.go:301] handling current node
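	
	The "nri plugin exited" line only means the runtime exposes no NRI socket; kindnet proceeds without it, as the subsequent "Caches are synced" and node-handling lines show. Purely illustrative, the socket's absence can be confirmed with:
	
		minikube -p default-k8s-diff-port-602386 ssh -- ls /var/run/nri/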
	
	
	==> kube-apiserver [3488887733e4b1fa9f02a91a8e5c407fff130459f0f6adaba9b71205988930bf] <==
	I1123 09:07:28.281145       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1123 09:07:28.282817       1 controller.go:667] quota admission added evaluator for: namespaces
	I1123 09:07:28.285037       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1123 09:07:28.285072       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1123 09:07:28.288934       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1123 09:07:28.289342       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1123 09:07:28.309424       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1123 09:07:29.186206       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1123 09:07:29.192245       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1123 09:07:29.192268       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1123 09:07:29.675010       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1123 09:07:29.714749       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1123 09:07:29.790518       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1123 09:07:29.797040       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.94.2]
	I1123 09:07:29.798315       1 controller.go:667] quota admission added evaluator for: endpoints
	I1123 09:07:29.802807       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1123 09:07:30.205574       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1123 09:07:30.931115       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1123 09:07:30.947449       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1123 09:07:30.957266       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1123 09:07:35.856933       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1123 09:07:35.862846       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1123 09:07:36.255427       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1123 09:07:36.304522       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	E1123 09:08:00.281858       1 conn.go:339] Error on socket receive: read tcp 192.168.94.2:8444->192.168.94.1:42956: use of closed network connection
	
	
	==> kube-controller-manager [b0aad8c47f0aa3e58752c6649627d73c0fa3c3541fd2f0d83741e18d747e6b3c] <==
	I1123 09:07:35.202242       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1123 09:07:35.202292       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1123 09:07:35.202352       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1123 09:07:35.202354       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1123 09:07:35.202414       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1123 09:07:35.202778       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1123 09:07:35.202806       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1123 09:07:35.202869       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1123 09:07:35.204004       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1123 09:07:35.205202       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1123 09:07:35.205240       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1123 09:07:35.205495       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1123 09:07:35.205508       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1123 09:07:35.206180       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1123 09:07:35.206238       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1123 09:07:35.206319       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1123 09:07:35.206332       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1123 09:07:35.206340       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1123 09:07:35.208697       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1123 09:07:35.208728       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1123 09:07:35.208704       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1123 09:07:35.212830       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="default-k8s-diff-port-602386" podCIDRs=["10.244.0.0/24"]
	I1123 09:07:35.215172       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1123 09:07:35.226792       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1123 09:07:50.123145       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [62e624f0a3e0a3170403655d23f05ead4e87f655ddae9d9e7316c691bbfabd8d] <==
	I1123 09:07:36.680001       1 server_linux.go:53] "Using iptables proxy"
	I1123 09:07:36.753263       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1123 09:07:36.853812       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1123 09:07:36.853850       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.94.2"]
	E1123 09:07:36.853981       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1123 09:07:36.879793       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1123 09:07:36.879879       1 server_linux.go:132] "Using iptables Proxier"
	I1123 09:07:36.885301       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1123 09:07:36.885591       1 server.go:527] "Version info" version="v1.34.1"
	I1123 09:07:36.885620       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1123 09:07:36.887041       1 config.go:106] "Starting endpoint slice config controller"
	I1123 09:07:36.887067       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1123 09:07:36.887216       1 config.go:309] "Starting node config controller"
	I1123 09:07:36.887227       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1123 09:07:36.887236       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1123 09:07:36.887309       1 config.go:200] "Starting service config controller"
	I1123 09:07:36.887320       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1123 09:07:36.888728       1 config.go:403] "Starting serviceCIDR config controller"
	I1123 09:07:36.888756       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1123 09:07:36.987320       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1123 09:07:36.987414       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1123 09:07:36.989233       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [e0dbf7100207a7f33d4602b9d4a6859ea4de198dcbed8be1c80d6a03a2800098] <==
	E1123 09:07:28.231095       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1123 09:07:28.231138       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1123 09:07:28.231164       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1123 09:07:28.231253       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1123 09:07:28.231277       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1123 09:07:28.231340       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1123 09:07:28.231257       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1123 09:07:28.231302       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1123 09:07:28.231278       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1123 09:07:28.231474       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1123 09:07:28.231484       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1123 09:07:28.231536       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1123 09:07:28.231552       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1123 09:07:29.056326       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1123 09:07:29.139252       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1123 09:07:29.169793       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1123 09:07:29.175045       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1123 09:07:29.247229       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1123 09:07:29.267507       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1123 09:07:29.274644       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1123 09:07:29.389247       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1123 09:07:29.481903       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1123 09:07:29.481902       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1123 09:07:29.674132       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	I1123 09:07:31.529101       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 23 09:07:31 default-k8s-diff-port-602386 kubelet[1317]: E1123 09:07:31.836771    1317 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-default-k8s-diff-port-602386\" already exists" pod="kube-system/kube-scheduler-default-k8s-diff-port-602386"
	Nov 23 09:07:31 default-k8s-diff-port-602386 kubelet[1317]: I1123 09:07:31.908360    1317 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-default-k8s-diff-port-602386" podStartSLOduration=1.908337558 podStartE2EDuration="1.908337558s" podCreationTimestamp="2025-11-23 09:07:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 09:07:31.907994172 +0000 UTC m=+1.217476598" watchObservedRunningTime="2025-11-23 09:07:31.908337558 +0000 UTC m=+1.217819985"
	Nov 23 09:07:31 default-k8s-diff-port-602386 kubelet[1317]: I1123 09:07:31.909213    1317 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-default-k8s-diff-port-602386" podStartSLOduration=1.909198674 podStartE2EDuration="1.909198674s" podCreationTimestamp="2025-11-23 09:07:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 09:07:31.889956357 +0000 UTC m=+1.199438783" watchObservedRunningTime="2025-11-23 09:07:31.909198674 +0000 UTC m=+1.218681101"
	Nov 23 09:07:31 default-k8s-diff-port-602386 kubelet[1317]: I1123 09:07:31.933744    1317 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-default-k8s-diff-port-602386" podStartSLOduration=1.9337187980000001 podStartE2EDuration="1.933718798s" podCreationTimestamp="2025-11-23 09:07:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 09:07:31.921695199 +0000 UTC m=+1.231177625" watchObservedRunningTime="2025-11-23 09:07:31.933718798 +0000 UTC m=+1.243201222"
	Nov 23 09:07:31 default-k8s-diff-port-602386 kubelet[1317]: I1123 09:07:31.934170    1317 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-default-k8s-diff-port-602386" podStartSLOduration=1.9341537290000002 podStartE2EDuration="1.934153729s" podCreationTimestamp="2025-11-23 09:07:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 09:07:31.932887404 +0000 UTC m=+1.242369831" watchObservedRunningTime="2025-11-23 09:07:31.934153729 +0000 UTC m=+1.243636154"
	Nov 23 09:07:35 default-k8s-diff-port-602386 kubelet[1317]: I1123 09:07:35.236539    1317 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Nov 23 09:07:35 default-k8s-diff-port-602386 kubelet[1317]: I1123 09:07:35.237347    1317 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 23 09:07:36 default-k8s-diff-port-602386 kubelet[1317]: I1123 09:07:36.322759    1317 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0c0df979-c169-4565-8362-dca7550a80f5-lib-modules\") pod \"kube-proxy-wnrqx\" (UID: \"0c0df979-c169-4565-8362-dca7550a80f5\") " pod="kube-system/kube-proxy-wnrqx"
	Nov 23 09:07:36 default-k8s-diff-port-602386 kubelet[1317]: I1123 09:07:36.322812    1317 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/33d86c85-7de0-42a9-90af-50ba26b9c963-lib-modules\") pod \"kindnet-kqj66\" (UID: \"33d86c85-7de0-42a9-90af-50ba26b9c963\") " pod="kube-system/kindnet-kqj66"
	Nov 23 09:07:36 default-k8s-diff-port-602386 kubelet[1317]: I1123 09:07:36.322836    1317 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/0c0df979-c169-4565-8362-dca7550a80f5-kube-proxy\") pod \"kube-proxy-wnrqx\" (UID: \"0c0df979-c169-4565-8362-dca7550a80f5\") " pod="kube-system/kube-proxy-wnrqx"
	Nov 23 09:07:36 default-k8s-diff-port-602386 kubelet[1317]: I1123 09:07:36.322855    1317 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/33d86c85-7de0-42a9-90af-50ba26b9c963-cni-cfg\") pod \"kindnet-kqj66\" (UID: \"33d86c85-7de0-42a9-90af-50ba26b9c963\") " pod="kube-system/kindnet-kqj66"
	Nov 23 09:07:36 default-k8s-diff-port-602386 kubelet[1317]: I1123 09:07:36.322884    1317 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/33d86c85-7de0-42a9-90af-50ba26b9c963-xtables-lock\") pod \"kindnet-kqj66\" (UID: \"33d86c85-7de0-42a9-90af-50ba26b9c963\") " pod="kube-system/kindnet-kqj66"
	Nov 23 09:07:36 default-k8s-diff-port-602386 kubelet[1317]: I1123 09:07:36.322912    1317 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0c0df979-c169-4565-8362-dca7550a80f5-xtables-lock\") pod \"kube-proxy-wnrqx\" (UID: \"0c0df979-c169-4565-8362-dca7550a80f5\") " pod="kube-system/kube-proxy-wnrqx"
	Nov 23 09:07:36 default-k8s-diff-port-602386 kubelet[1317]: I1123 09:07:36.323008    1317 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qjb84\" (UniqueName: \"kubernetes.io/projected/0c0df979-c169-4565-8362-dca7550a80f5-kube-api-access-qjb84\") pod \"kube-proxy-wnrqx\" (UID: \"0c0df979-c169-4565-8362-dca7550a80f5\") " pod="kube-system/kube-proxy-wnrqx"
	Nov 23 09:07:36 default-k8s-diff-port-602386 kubelet[1317]: I1123 09:07:36.323137    1317 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g9pnx\" (UniqueName: \"kubernetes.io/projected/33d86c85-7de0-42a9-90af-50ba26b9c963-kube-api-access-g9pnx\") pod \"kindnet-kqj66\" (UID: \"33d86c85-7de0-42a9-90af-50ba26b9c963\") " pod="kube-system/kindnet-kqj66"
	Nov 23 09:07:36 default-k8s-diff-port-602386 kubelet[1317]: I1123 09:07:36.841580    1317 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-kqj66" podStartSLOduration=0.841557255 podStartE2EDuration="841.557255ms" podCreationTimestamp="2025-11-23 09:07:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 09:07:36.841451905 +0000 UTC m=+6.150934331" watchObservedRunningTime="2025-11-23 09:07:36.841557255 +0000 UTC m=+6.151039681"
	Nov 23 09:07:37 default-k8s-diff-port-602386 kubelet[1317]: I1123 09:07:37.125822    1317 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-wnrqx" podStartSLOduration=1.125801433 podStartE2EDuration="1.125801433s" podCreationTimestamp="2025-11-23 09:07:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 09:07:36.851234515 +0000 UTC m=+6.160716941" watchObservedRunningTime="2025-11-23 09:07:37.125801433 +0000 UTC m=+6.435283859"
	Nov 23 09:07:47 default-k8s-diff-port-602386 kubelet[1317]: I1123 09:07:47.622724    1317 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Nov 23 09:07:47 default-k8s-diff-port-602386 kubelet[1317]: I1123 09:07:47.707334    1317 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/68aad3cb-9d9e-4bca-9271-f4b65e2a8a9f-tmp\") pod \"storage-provisioner\" (UID: \"68aad3cb-9d9e-4bca-9271-f4b65e2a8a9f\") " pod="kube-system/storage-provisioner"
	Nov 23 09:07:47 default-k8s-diff-port-602386 kubelet[1317]: I1123 09:07:47.707380    1317 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wfwqk\" (UniqueName: \"kubernetes.io/projected/68aad3cb-9d9e-4bca-9271-f4b65e2a8a9f-kube-api-access-wfwqk\") pod \"storage-provisioner\" (UID: \"68aad3cb-9d9e-4bca-9271-f4b65e2a8a9f\") " pod="kube-system/storage-provisioner"
	Nov 23 09:07:47 default-k8s-diff-port-602386 kubelet[1317]: I1123 09:07:47.707415    1317 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/47d854af-a566-4a34-a2aa-c7e774b7349f-config-volume\") pod \"coredns-66bc5c9577-64rdm\" (UID: \"47d854af-a566-4a34-a2aa-c7e774b7349f\") " pod="kube-system/coredns-66bc5c9577-64rdm"
	Nov 23 09:07:47 default-k8s-diff-port-602386 kubelet[1317]: I1123 09:07:47.707443    1317 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-95nst\" (UniqueName: \"kubernetes.io/projected/47d854af-a566-4a34-a2aa-c7e774b7349f-kube-api-access-95nst\") pod \"coredns-66bc5c9577-64rdm\" (UID: \"47d854af-a566-4a34-a2aa-c7e774b7349f\") " pod="kube-system/coredns-66bc5c9577-64rdm"
	Nov 23 09:07:48 default-k8s-diff-port-602386 kubelet[1317]: I1123 09:07:48.872520    1317 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=13.872499801 podStartE2EDuration="13.872499801s" podCreationTimestamp="2025-11-23 09:07:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 09:07:48.872286106 +0000 UTC m=+18.181768532" watchObservedRunningTime="2025-11-23 09:07:48.872499801 +0000 UTC m=+18.181982228"
	Nov 23 09:07:48 default-k8s-diff-port-602386 kubelet[1317]: I1123 09:07:48.884775    1317 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-64rdm" podStartSLOduration=12.884750247 podStartE2EDuration="12.884750247s" podCreationTimestamp="2025-11-23 09:07:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 09:07:48.8844335 +0000 UTC m=+18.193915926" watchObservedRunningTime="2025-11-23 09:07:48.884750247 +0000 UTC m=+18.194232673"
	Nov 23 09:07:51 default-k8s-diff-port-602386 kubelet[1317]: I1123 09:07:51.229641    1317 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w84v6\" (UniqueName: \"kubernetes.io/projected/4a775da7-4f9d-4680-9fb4-7d598e9e8512-kube-api-access-w84v6\") pod \"busybox\" (UID: \"4a775da7-4f9d-4680-9fb4-7d598e9e8512\") " pod="default/busybox"
	
	
	==> storage-provisioner [18079a3e046d0d34118f19177721250ca770def42920c11cf9e4694c5780ee82] <==
	I1123 09:07:48.035235       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1123 09:07:48.044926       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1123 09:07:48.044990       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1123 09:07:48.047381       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:07:48.052228       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1123 09:07:48.052410       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1123 09:07:48.052613       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-602386_e80b53a5-b501-4e85-984a-e8818a59f4dd!
	I1123 09:07:48.052611       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"372e2195-5e26-4589-a356-657025d5ccfc", APIVersion:"v1", ResourceVersion:"442", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-602386_e80b53a5-b501-4e85-984a-e8818a59f4dd became leader
	W1123 09:07:48.055980       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:07:48.059871       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1123 09:07:48.152868       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-602386_e80b53a5-b501-4e85-984a-e8818a59f4dd!
	W1123 09:07:50.063822       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:07:50.067846       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:07:52.071799       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:07:52.078820       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:07:54.082076       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:07:54.086733       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:07:56.090502       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:07:56.096428       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:07:58.100015       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:07:58.104354       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:08:00.108085       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:08:00.117919       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
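
The kube-scheduler "Failed to watch ... is forbidden" errors above are the usual startup race: the scheduler's informers begin listing resources before the apiserver has finished reconciling its default RBAC roles, and they stop once the caches sync (the "Caches are synced" line at 09:07:31). If the permissions were genuinely missing, impersonation would show it; as a minimal sketch against this profile:

	kubectl --context default-k8s-diff-port-602386 auth can-i list csistoragecapacities.storage.k8s.io --as=system:kube-scheduler
	kubectl --context default-k8s-diff-port-602386 auth can-i watch services --as=system:kube-scheduler

Both should print "yes" on a healthy cluster. The storage-provisioner's repeating "v1 Endpoints is deprecated" warnings come from its Endpoints-based leader-election renewals; on v1.33+ they are expected noise, not part of this failure.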
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-602386 -n default-k8s-diff-port-602386
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-602386 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (2.23s)
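
Note that the post-mortem itself looks healthy: kube-proxy, kindnet, storage-provisioner and coredns all reached Running within roughly 18 seconds of kubelet start, so the 2.23s failure sits in the addon-enable code path rather than in the cluster. The Audit table later in this report records the corresponding `addons enable metrics-server` step with no end time; as a hedged sketch, the step can be replayed by hand with the flags copied from that audit entry:

	out/minikube-linux-amd64 -p default-k8s-diff-port-602386 addons enable metrics-server \
	    --images=MetricsServer=registry.k8s.io/echoserver:1.4 \
	    --registries=MetricsServer=fake.domain --alsologtostderr -v=1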

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Pause (7.06s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-054094 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p old-k8s-version-054094 --alsologtostderr -v=1: exit status 80 (1.908482087s)

                                                
                                                
-- stdout --
	* Pausing node old-k8s-version-054094 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1123 09:08:26.735851  419184 out.go:360] Setting OutFile to fd 1 ...
	I1123 09:08:26.736193  419184 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 09:08:26.736208  419184 out.go:374] Setting ErrFile to fd 2...
	I1123 09:08:26.736217  419184 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 09:08:26.736824  419184 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21969-103686/.minikube/bin
	I1123 09:08:26.737461  419184 out.go:368] Setting JSON to false
	I1123 09:08:26.737487  419184 mustload.go:66] Loading cluster: old-k8s-version-054094
	I1123 09:08:26.738505  419184 config.go:182] Loaded profile config "old-k8s-version-054094": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1123 09:08:26.739132  419184 cli_runner.go:164] Run: docker container inspect old-k8s-version-054094 --format={{.State.Status}}
	I1123 09:08:26.769147  419184 host.go:66] Checking if "old-k8s-version-054094" exists ...
	I1123 09:08:26.769643  419184 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 09:08:26.857394  419184 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:80 OomKillDisable:false NGoroutines:88 SystemTime:2025-11-23 09:08:26.84517456 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1123 09:08:26.858217  419184 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1763503576-21924/minikube-v1.37.0-1763503576-21924-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1763503576-21924-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:old-k8s-version-054094 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1123 09:08:26.864470  419184 out.go:179] * Pausing node old-k8s-version-054094 ... 
	I1123 09:08:26.865627  419184 host.go:66] Checking if "old-k8s-version-054094" exists ...
	I1123 09:08:26.865939  419184 ssh_runner.go:195] Run: systemctl --version
	I1123 09:08:26.866024  419184 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-054094
	I1123 09:08:26.887374  419184 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33108 SSHKeyPath:/home/jenkins/minikube-integration/21969-103686/.minikube/machines/old-k8s-version-054094/id_rsa Username:docker}
	I1123 09:08:26.999451  419184 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 09:08:27.013293  419184 pause.go:52] kubelet running: true
	I1123 09:08:27.013384  419184 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1123 09:08:27.248737  419184 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1123 09:08:27.249004  419184 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1123 09:08:27.336623  419184 cri.go:89] found id: "a90702afed2b040fbd77498ec11afee60f27d3a30d65069dea6e6961e8118621"
	I1123 09:08:27.336653  419184 cri.go:89] found id: "629c8538dd18c46925238739061bb0f44ca62dc0ae653a849f5a698e44652b68"
	I1123 09:08:27.336676  419184 cri.go:89] found id: "f32d9a2f7dcfa4d5ba236560662bb95e5ec188a673b28df770ec09f9d9c6aac9"
	I1123 09:08:27.336682  419184 cri.go:89] found id: "3a5035af2c25e9076b679fe308a44f43a32681ba1653ba021cc6294822caf7f9"
	I1123 09:08:27.336687  419184 cri.go:89] found id: "5e74dbebbc2f09c92cc8f26c86f4a178da062712bef7fb2aa891abaf9d0ef753"
	I1123 09:08:27.336692  419184 cri.go:89] found id: "67da9dae46c0f7bf57f9dc994797c8788e6a957999afabdd876c802e5872cb68"
	I1123 09:08:27.336696  419184 cri.go:89] found id: "c7dc1d98ec4da99d3a0764984d5923c598972517dad05844b7805b9388bb5cc9"
	I1123 09:08:27.336700  419184 cri.go:89] found id: "67cdf9a216a06c548df986856a47cb4952575cfc9b63188445c10205400e34be"
	I1123 09:08:27.336705  419184 cri.go:89] found id: "f308bae766722fb5efa2c7d1616cb7025893f5d7f71c748c3370f5085550daeb"
	I1123 09:08:27.336714  419184 cri.go:89] found id: "f0510ef795a2e0b5c70d3d975ff8094ef772658377dd866efff16426b9ceed2c"
	I1123 09:08:27.336718  419184 cri.go:89] found id: "b7902f0397bf02fb653af022bdad06aea40eb13c6da9af1435a515c5ad12d0e1"
	I1123 09:08:27.336733  419184 cri.go:89] found id: ""
	I1123 09:08:27.336776  419184 ssh_runner.go:195] Run: sudo runc list -f json
	I1123 09:08:27.351899  419184 retry.go:31] will retry after 237.713437ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T09:08:27Z" level=error msg="open /run/runc: no such file or directory"
	I1123 09:08:27.591133  419184 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 09:08:27.613922  419184 pause.go:52] kubelet running: false
	I1123 09:08:27.614184  419184 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1123 09:08:27.850893  419184 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1123 09:08:27.851255  419184 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1123 09:08:27.964546  419184 cri.go:89] found id: "a90702afed2b040fbd77498ec11afee60f27d3a30d65069dea6e6961e8118621"
	I1123 09:08:27.964593  419184 cri.go:89] found id: "629c8538dd18c46925238739061bb0f44ca62dc0ae653a849f5a698e44652b68"
	I1123 09:08:27.964601  419184 cri.go:89] found id: "f32d9a2f7dcfa4d5ba236560662bb95e5ec188a673b28df770ec09f9d9c6aac9"
	I1123 09:08:27.964607  419184 cri.go:89] found id: "3a5035af2c25e9076b679fe308a44f43a32681ba1653ba021cc6294822caf7f9"
	I1123 09:08:27.964613  419184 cri.go:89] found id: "5e74dbebbc2f09c92cc8f26c86f4a178da062712bef7fb2aa891abaf9d0ef753"
	I1123 09:08:27.964619  419184 cri.go:89] found id: "67da9dae46c0f7bf57f9dc994797c8788e6a957999afabdd876c802e5872cb68"
	I1123 09:08:27.964625  419184 cri.go:89] found id: "c7dc1d98ec4da99d3a0764984d5923c598972517dad05844b7805b9388bb5cc9"
	I1123 09:08:27.964631  419184 cri.go:89] found id: "67cdf9a216a06c548df986856a47cb4952575cfc9b63188445c10205400e34be"
	I1123 09:08:27.964636  419184 cri.go:89] found id: "f308bae766722fb5efa2c7d1616cb7025893f5d7f71c748c3370f5085550daeb"
	I1123 09:08:27.964667  419184 cri.go:89] found id: "f0510ef795a2e0b5c70d3d975ff8094ef772658377dd866efff16426b9ceed2c"
	I1123 09:08:27.964674  419184 cri.go:89] found id: "b7902f0397bf02fb653af022bdad06aea40eb13c6da9af1435a515c5ad12d0e1"
	I1123 09:08:27.964679  419184 cri.go:89] found id: ""
	I1123 09:08:27.964741  419184 ssh_runner.go:195] Run: sudo runc list -f json
	I1123 09:08:27.985985  419184 retry.go:31] will retry after 200.779408ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T09:08:27Z" level=error msg="open /run/runc: no such file or directory"
	I1123 09:08:28.187422  419184 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 09:08:28.206605  419184 pause.go:52] kubelet running: false
	I1123 09:08:28.206753  419184 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1123 09:08:28.437744  419184 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1123 09:08:28.437859  419184 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1123 09:08:28.533832  419184 cri.go:89] found id: "a90702afed2b040fbd77498ec11afee60f27d3a30d65069dea6e6961e8118621"
	I1123 09:08:28.533861  419184 cri.go:89] found id: "629c8538dd18c46925238739061bb0f44ca62dc0ae653a849f5a698e44652b68"
	I1123 09:08:28.533868  419184 cri.go:89] found id: "f32d9a2f7dcfa4d5ba236560662bb95e5ec188a673b28df770ec09f9d9c6aac9"
	I1123 09:08:28.533874  419184 cri.go:89] found id: "3a5035af2c25e9076b679fe308a44f43a32681ba1653ba021cc6294822caf7f9"
	I1123 09:08:28.533878  419184 cri.go:89] found id: "5e74dbebbc2f09c92cc8f26c86f4a178da062712bef7fb2aa891abaf9d0ef753"
	I1123 09:08:28.533882  419184 cri.go:89] found id: "67da9dae46c0f7bf57f9dc994797c8788e6a957999afabdd876c802e5872cb68"
	I1123 09:08:28.533887  419184 cri.go:89] found id: "c7dc1d98ec4da99d3a0764984d5923c598972517dad05844b7805b9388bb5cc9"
	I1123 09:08:28.533891  419184 cri.go:89] found id: "67cdf9a216a06c548df986856a47cb4952575cfc9b63188445c10205400e34be"
	I1123 09:08:28.533896  419184 cri.go:89] found id: "f308bae766722fb5efa2c7d1616cb7025893f5d7f71c748c3370f5085550daeb"
	I1123 09:08:28.533904  419184 cri.go:89] found id: "f0510ef795a2e0b5c70d3d975ff8094ef772658377dd866efff16426b9ceed2c"
	I1123 09:08:28.533909  419184 cri.go:89] found id: "b7902f0397bf02fb653af022bdad06aea40eb13c6da9af1435a515c5ad12d0e1"
	I1123 09:08:28.533913  419184 cri.go:89] found id: ""
	I1123 09:08:28.533963  419184 ssh_runner.go:195] Run: sudo runc list -f json
	I1123 09:08:28.552316  419184 out.go:203] 
	W1123 09:08:28.553505  419184 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T09:08:28Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T09:08:28Z" level=error msg="open /run/runc: no such file or directory"
	
	W1123 09:08:28.553527  419184 out.go:285] * 
	* 
	W1123 09:08:28.560648  419184 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1123 09:08:28.561900  419184 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p old-k8s-version-054094 --alsologtostderr -v=1 failed: exit status 80
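
The trace above shows exactly where pause gives up: `crictl ps` finds all eleven kube-system containers, but each of the three attempts at `sudo runc list -f json` fails with "open /run/runc: no such file or directory", i.e. nothing is registered under runc's default state root even though cri-o is clearly running containers. One plausible reading (an assumption, not confirmed by this log) is that this cri-o build launches containers through a different OCI runtime or state directory, for example crun under /run/crun, leaving the runc view empty. A sketch for checking from the host; the /run/crun path is the hypothetical part:

	out/minikube-linux-amd64 ssh -p old-k8s-version-054094 "sudo crictl ps -q"            # containers cri-o knows about
	out/minikube-linux-amd64 ssh -p old-k8s-version-054094 "sudo ls /run/runc /run/crun"  # which state root actually exists
	out/minikube-linux-amd64 ssh -p old-k8s-version-054094 "sudo crio config" | grep -A3 runtimes

Note also that the first attempt had already run `systemctl disable --now kubelet` ("kubelet running: true", then "false" on the retries), so the failed pause leaves the node half-paused: kubelet stopped, containers still up. That is why the status probe below reports the host as Running yet exits with status 2.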
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-054094
helpers_test.go:243: (dbg) docker inspect old-k8s-version-054094:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "6fbb3e1692df1d8bcc2570eb6c4edce5c5187b424f65561f910e65c2a64b97b3",
	        "Created": "2025-11-23T09:06:14.055238477Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 407032,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-23T09:07:31.536642342Z",
	            "FinishedAt": "2025-11-23T09:07:30.561167024Z"
	        },
	        "Image": "sha256:133ca4ac39008d0056ad45d8cb70521d6b70d6e1b8bbff4678fd4b354efbdf70",
	        "ResolvConfPath": "/var/lib/docker/containers/6fbb3e1692df1d8bcc2570eb6c4edce5c5187b424f65561f910e65c2a64b97b3/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/6fbb3e1692df1d8bcc2570eb6c4edce5c5187b424f65561f910e65c2a64b97b3/hostname",
	        "HostsPath": "/var/lib/docker/containers/6fbb3e1692df1d8bcc2570eb6c4edce5c5187b424f65561f910e65c2a64b97b3/hosts",
	        "LogPath": "/var/lib/docker/containers/6fbb3e1692df1d8bcc2570eb6c4edce5c5187b424f65561f910e65c2a64b97b3/6fbb3e1692df1d8bcc2570eb6c4edce5c5187b424f65561f910e65c2a64b97b3-json.log",
	        "Name": "/old-k8s-version-054094",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-054094:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "old-k8s-version-054094",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "6fbb3e1692df1d8bcc2570eb6c4edce5c5187b424f65561f910e65c2a64b97b3",
	                "LowerDir": "/var/lib/docker/overlay2/7896100ea5d6d69fd8679aef5e7b10670677a84f077ad468f383d9f86b9a4a33-init/diff:/var/lib/docker/overlay2/5d7426d7b7590330a796f9ce8929a3dfaf6f95af5e86f4e4ea7f2a7f53308616/diff",
	                "MergedDir": "/var/lib/docker/overlay2/7896100ea5d6d69fd8679aef5e7b10670677a84f077ad468f383d9f86b9a4a33/merged",
	                "UpperDir": "/var/lib/docker/overlay2/7896100ea5d6d69fd8679aef5e7b10670677a84f077ad468f383d9f86b9a4a33/diff",
	                "WorkDir": "/var/lib/docker/overlay2/7896100ea5d6d69fd8679aef5e7b10670677a84f077ad468f383d9f86b9a4a33/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-054094",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-054094/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-054094",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-054094",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-054094",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "b7c0085e5dce4e6df2695e73598bb0cc19910327212e4ad847442c04a69b893d",
	            "SandboxKey": "/var/run/docker/netns/b7c0085e5dce",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33108"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33109"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33112"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33110"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33111"
	                    }
	                ]
	            },
	            "Networks": {
	                "old-k8s-version-054094": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "76e5790841e8d84532c8d28d1be8e40ba53fa4abb8a22eef487cc6e2d204979d",
	                    "EndpointID": "4cd1dd5db15b30b14fb0b508620442f32a28adc77e8e5c268eabbc5a1f7ccd04",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "MacAddress": "56:3f:31:ff:e4:85",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-054094",
	                        "6fbb3e1692df"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
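
The 22/tcp entry in this inspect output (HostIp 127.0.0.1, HostPort 33108) is the endpoint the pause command dialed earlier ("new ssh client: &{IP:127.0.0.1 Port:33108 ..."): minikube resolves the node's SSH port by reading the published binding with a Go template, and the same lookup works by hand:

	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' old-k8s-version-054094
	# -> 33108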
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-054094 -n old-k8s-version-054094
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-054094 -n old-k8s-version-054094: exit status 2 (455.918989ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-054094 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-054094 logs -n 25: (1.499836011s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p bridge-741183 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ bridge-741183                │ jenkins │ v1.37.0 │ 23 Nov 25 09:07 UTC │ 23 Nov 25 09:07 UTC │
	│ ssh     │ -p bridge-741183 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ bridge-741183                │ jenkins │ v1.37.0 │ 23 Nov 25 09:07 UTC │ 23 Nov 25 09:07 UTC │
	│ ssh     │ -p bridge-741183 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ bridge-741183                │ jenkins │ v1.37.0 │ 23 Nov 25 09:07 UTC │ 23 Nov 25 09:07 UTC │
	│ ssh     │ -p bridge-741183 sudo crio config                                                                                                                                                                                                             │ bridge-741183                │ jenkins │ v1.37.0 │ 23 Nov 25 09:07 UTC │ 23 Nov 25 09:07 UTC │
	│ delete  │ -p bridge-741183                                                                                                                                                                                                                              │ bridge-741183                │ jenkins │ v1.37.0 │ 23 Nov 25 09:07 UTC │ 23 Nov 25 09:07 UTC │
	│ delete  │ -p disable-driver-mounts-740936                                                                                                                                                                                                               │ disable-driver-mounts-740936 │ jenkins │ v1.37.0 │ 23 Nov 25 09:07 UTC │ 23 Nov 25 09:07 UTC │
	│ start   │ -p default-k8s-diff-port-602386 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-602386 │ jenkins │ v1.37.0 │ 23 Nov 25 09:07 UTC │ 23 Nov 25 09:07 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-054094 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-054094       │ jenkins │ v1.37.0 │ 23 Nov 25 09:07 UTC │                     │
	│ stop    │ -p old-k8s-version-054094 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-054094       │ jenkins │ v1.37.0 │ 23 Nov 25 09:07 UTC │ 23 Nov 25 09:07 UTC │
	│ addons  │ enable metrics-server -p no-preload-619589 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-619589            │ jenkins │ v1.37.0 │ 23 Nov 25 09:07 UTC │                     │
	│ stop    │ -p no-preload-619589 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-619589            │ jenkins │ v1.37.0 │ 23 Nov 25 09:07 UTC │ 23 Nov 25 09:07 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-054094 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-054094       │ jenkins │ v1.37.0 │ 23 Nov 25 09:07 UTC │ 23 Nov 25 09:07 UTC │
	│ start   │ -p old-k8s-version-054094 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-054094       │ jenkins │ v1.37.0 │ 23 Nov 25 09:07 UTC │ 23 Nov 25 09:08 UTC │
	│ addons  │ enable dashboard -p no-preload-619589 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-619589            │ jenkins │ v1.37.0 │ 23 Nov 25 09:07 UTC │ 23 Nov 25 09:07 UTC │
	│ start   │ -p no-preload-619589 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-619589            │ jenkins │ v1.37.0 │ 23 Nov 25 09:07 UTC │                     │
	│ addons  │ enable metrics-server -p embed-certs-529341 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-529341           │ jenkins │ v1.37.0 │ 23 Nov 25 09:07 UTC │                     │
	│ stop    │ -p embed-certs-529341 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-529341           │ jenkins │ v1.37.0 │ 23 Nov 25 09:07 UTC │ 23 Nov 25 09:08 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-602386 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-602386 │ jenkins │ v1.37.0 │ 23 Nov 25 09:08 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-602386 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-602386 │ jenkins │ v1.37.0 │ 23 Nov 25 09:08 UTC │ 23 Nov 25 09:08 UTC │
	│ addons  │ enable dashboard -p embed-certs-529341 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-529341           │ jenkins │ v1.37.0 │ 23 Nov 25 09:08 UTC │ 23 Nov 25 09:08 UTC │
	│ start   │ -p embed-certs-529341 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-529341           │ jenkins │ v1.37.0 │ 23 Nov 25 09:08 UTC │                     │
	│ addons  │ enable dashboard -p default-k8s-diff-port-602386 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-602386 │ jenkins │ v1.37.0 │ 23 Nov 25 09:08 UTC │ 23 Nov 25 09:08 UTC │
	│ start   │ -p default-k8s-diff-port-602386 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-602386 │ jenkins │ v1.37.0 │ 23 Nov 25 09:08 UTC │                     │
	│ image   │ old-k8s-version-054094 image list --format=json                                                                                                                                                                                               │ old-k8s-version-054094       │ jenkins │ v1.37.0 │ 23 Nov 25 09:08 UTC │ 23 Nov 25 09:08 UTC │
	│ pause   │ -p old-k8s-version-054094 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-054094       │ jenkins │ v1.37.0 │ 23 Nov 25 09:08 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/23 09:08:19
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1123 09:08:19.205185  416838 out.go:360] Setting OutFile to fd 1 ...
	I1123 09:08:19.205478  416838 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 09:08:19.205490  416838 out.go:374] Setting ErrFile to fd 2...
	I1123 09:08:19.205494  416838 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 09:08:19.205722  416838 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21969-103686/.minikube/bin
	I1123 09:08:19.206165  416838 out.go:368] Setting JSON to false
	I1123 09:08:19.207257  416838 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":6639,"bootTime":1763882260,"procs":314,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1123 09:08:19.207311  416838 start.go:143] virtualization: kvm guest
	I1123 09:08:19.209393  416838 out.go:179] * [default-k8s-diff-port-602386] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1123 09:08:19.210651  416838 out.go:179]   - MINIKUBE_LOCATION=21969
	I1123 09:08:19.210665  416838 notify.go:221] Checking for updates...
	I1123 09:08:19.212882  416838 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1123 09:08:19.214456  416838 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21969-103686/kubeconfig
	I1123 09:08:19.215612  416838 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21969-103686/.minikube
	I1123 09:08:19.216796  416838 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1123 09:08:19.217884  416838 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1123 09:08:19.219361  416838 config.go:182] Loaded profile config "default-k8s-diff-port-602386": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 09:08:19.220146  416838 driver.go:422] Setting default libvirt URI to qemu:///system
	I1123 09:08:19.250242  416838 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1123 09:08:19.250361  416838 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 09:08:19.315372  416838 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:77 SystemTime:2025-11-23 09:08:19.303997527 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1123 09:08:19.315465  416838 docker.go:319] overlay module found
	I1123 09:08:19.317310  416838 out.go:179] * Using the docker driver based on existing profile
	I1123 09:08:19.318384  416838 start.go:309] selected driver: docker
	I1123 09:08:19.318405  416838 start.go:927] validating driver "docker" against &{Name:default-k8s-diff-port-602386 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-602386 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 09:08:19.318479  416838 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1123 09:08:19.318935  416838 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 09:08:19.382679  416838 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:77 SystemTime:2025-11-23 09:08:19.371806831 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1123 09:08:19.383144  416838 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1123 09:08:19.383235  416838 cni.go:84] Creating CNI manager for ""
	I1123 09:08:19.383308  416838 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1123 09:08:19.383382  416838 start.go:353] cluster config:
	{Name:default-k8s-diff-port-602386 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-602386 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 09:08:19.385418  416838 out.go:179] * Starting "default-k8s-diff-port-602386" primary control-plane node in "default-k8s-diff-port-602386" cluster
	I1123 09:08:19.386625  416838 cache.go:134] Beginning downloading kic base image for docker with crio
	I1123 09:08:19.387845  416838 out.go:179] * Pulling base image v0.0.48-1763789673-21948 ...
	I1123 09:08:19.388905  416838 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1123 09:08:19.388955  416838 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21969-103686/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1123 09:08:19.388981  416838 cache.go:65] Caching tarball of preloaded images
	I1123 09:08:19.389030  416838 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1123 09:08:19.389087  416838 preload.go:238] Found /home/jenkins/minikube-integration/21969-103686/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1123 09:08:19.389104  416838 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1123 09:08:19.389230  416838 profile.go:143] Saving config to /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/default-k8s-diff-port-602386/config.json ...
	I1123 09:08:19.412109  416838 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon, skipping pull
	I1123 09:08:19.412136  416838 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in daemon, skipping load
	I1123 09:08:19.412156  416838 cache.go:243] Successfully downloaded all kic artifacts
	I1123 09:08:19.412202  416838 start.go:360] acquireMachinesLock for default-k8s-diff-port-602386: {Name:mk936d882fdf1c8707634b4555fdb3d8130ce5fd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1123 09:08:19.412273  416838 start.go:364] duration metric: took 46.298µs to acquireMachinesLock for "default-k8s-diff-port-602386"
	I1123 09:08:19.412295  416838 start.go:96] Skipping create...Using existing machine configuration
	I1123 09:08:19.412304  416838 fix.go:54] fixHost starting: 
	I1123 09:08:19.412592  416838 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-602386 --format={{.State.Status}}
	I1123 09:08:19.431148  416838 fix.go:112] recreateIfNeeded on default-k8s-diff-port-602386: state=Stopped err=<nil>
	W1123 09:08:19.431179  416838 fix.go:138] unexpected machine state, will restart: <nil>
	I1123 09:08:18.690385  415250 cli_runner.go:164] Run: docker network inspect embed-certs-529341 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
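	==> Example: docker network inspect template (sketch) <==
	The Go template above packs the network's name, driver, subnet, gateway, MTU, and container IPs into one JSON line. For a single field a much shorter template suffices; a hand-runnable sketch using the network name from this log:
	  # sketch: print just the subnet of the minikube-created network
	  docker network inspect embed-certs-529341 --format '{{(index .IPAM.Config 0).Subnet}}'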
	I1123 09:08:18.708783  415250 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1123 09:08:18.713056  415250 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
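	==> Example: idempotent /etc/hosts update (sketch) <==
	The command above is minikube's idempotent rewrite of /etc/hosts: strip any existing entry for the hostname, append the fresh one, then copy the result back into place. A minimal standalone sketch of the same pattern; the IP and hostname are taken from this log:
	  # sketch: remove any old tab-separated entry for $HOST, then re-add it
	  IP="192.168.103.1"; HOST="host.minikube.internal"
	  { grep -v $'\t'"$HOST"'$' /etc/hosts; printf '%s\t%s\n' "$IP" "$HOST"; } > /tmp/h.$$
	  sudo cp /tmp/h.$$ /etc/hosts && rm -f /tmp/h.$$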
	I1123 09:08:18.723419  415250 kubeadm.go:884] updating cluster {Name:embed-certs-529341 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-529341 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1123 09:08:18.723556  415250 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1123 09:08:18.723620  415250 ssh_runner.go:195] Run: sudo crictl images --output json
	I1123 09:08:18.755473  415250 crio.go:514] all images are preloaded for cri-o runtime.
	I1123 09:08:18.755495  415250 crio.go:433] Images already preloaded, skipping extraction
	I1123 09:08:18.755541  415250 ssh_runner.go:195] Run: sudo crictl images --output json
	I1123 09:08:18.783438  415250 crio.go:514] all images are preloaded for cri-o runtime.
	I1123 09:08:18.783464  415250 cache_images.go:86] Images are preloaded, skipping loading
	I1123 09:08:18.783474  415250 kubeadm.go:935] updating node { 192.168.103.2 8443 v1.34.1 crio true true} ...
	I1123 09:08:18.783627  415250 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-529341 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:embed-certs-529341 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
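	==> Example: kubelet systemd drop-in (sketch) <==
	The empty ExecStart= followed by a second ExecStart= above is systemd's standard mechanism for replacing (rather than appending to) a unit's command from a drop-in. A sketch of installing such an override by hand; the drop-in path matches the scp target a few lines below, and the kubelet flags are abbreviated from the full command above:
	  # sketch: write a drop-in that clears and replaces kubelet's ExecStart
	  sudo mkdir -p /etc/systemd/system/kubelet.service.d
	  printf '[Service]\nExecStart=\nExecStart=%s\n' \
	    "/var/lib/minikube/binaries/v1.34.1/kubelet --kubeconfig=/etc/kubernetes/kubelet.conf" \
	    | sudo tee /etc/systemd/system/kubelet.service.d/10-kubeadm.conf >/dev/null
	  sudo systemctl daemon-reload && sudo systemctl restart kubelet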
	I1123 09:08:18.783724  415250 ssh_runner.go:195] Run: crio config
	I1123 09:08:18.831499  415250 cni.go:84] Creating CNI manager for ""
	I1123 09:08:18.831521  415250 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1123 09:08:18.831544  415250 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1123 09:08:18.831580  415250 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-529341 NodeName:embed-certs-529341 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1123 09:08:18.831739  415250 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-529341"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
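	==> Example: validating the generated kubeadm config (sketch) <==
	The four YAML documents above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) are what gets written to /var/tmp/minikube/kubeadm.yaml.new a few lines below. Recent kubeadm releases can lint such a multi-document file before it is used; a sketch, assuming a kubeadm new enough to ship the `config validate` subcommand:
	  # sketch: lint the staged kubeadm config (path taken from this log)
	  kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new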
	
	I1123 09:08:18.831814  415250 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1123 09:08:18.841479  415250 binaries.go:51] Found k8s binaries, skipping transfer
	I1123 09:08:18.841543  415250 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1123 09:08:18.853942  415250 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (369 bytes)
	I1123 09:08:18.869083  415250 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1123 09:08:18.898425  415250 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2217 bytes)
	I1123 09:08:18.911340  415250 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1123 09:08:18.915384  415250 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1123 09:08:18.925993  415250 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 09:08:19.016337  415250 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 09:08:19.039545  415250 certs.go:69] Setting up /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/embed-certs-529341 for IP: 192.168.103.2
	I1123 09:08:19.039568  415250 certs.go:195] generating shared ca certs ...
	I1123 09:08:19.039591  415250 certs.go:227] acquiring lock for ca certs: {Name:mkeed0bc088459219396fb35c578dfb0927f31a3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 09:08:19.039753  415250 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21969-103686/.minikube/ca.key
	I1123 09:08:19.039805  415250 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21969-103686/.minikube/proxy-client-ca.key
	I1123 09:08:19.039820  415250 certs.go:257] generating profile certs ...
	I1123 09:08:19.039928  415250 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/embed-certs-529341/client.key
	I1123 09:08:19.040028  415250 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/embed-certs-529341/apiserver.key.ad13d260
	I1123 09:08:19.040078  415250 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/embed-certs-529341/proxy-client.key
	I1123 09:08:19.040220  415250 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-103686/.minikube/certs/107234.pem (1338 bytes)
	W1123 09:08:19.040263  415250 certs.go:480] ignoring /home/jenkins/minikube-integration/21969-103686/.minikube/certs/107234_empty.pem, impossibly tiny 0 bytes
	I1123 09:08:19.040278  415250 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-103686/.minikube/certs/ca-key.pem (1675 bytes)
	I1123 09:08:19.040314  415250 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-103686/.minikube/certs/ca.pem (1078 bytes)
	I1123 09:08:19.040346  415250 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-103686/.minikube/certs/cert.pem (1123 bytes)
	I1123 09:08:19.040382  415250 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-103686/.minikube/certs/key.pem (1675 bytes)
	I1123 09:08:19.040438  415250 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-103686/.minikube/files/etc/ssl/certs/1072342.pem (1708 bytes)
	I1123 09:08:19.041169  415250 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-103686/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1123 09:08:19.062372  415250 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-103686/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1123 09:08:19.082033  415250 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-103686/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1123 09:08:19.103400  415250 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-103686/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1671 bytes)
	I1123 09:08:19.126656  415250 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/embed-certs-529341/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1123 09:08:19.150062  415250 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/embed-certs-529341/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1123 09:08:19.167767  415250 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/embed-certs-529341/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1123 09:08:19.186013  415250 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/embed-certs-529341/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1123 09:08:19.204328  415250 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-103686/.minikube/certs/107234.pem --> /usr/share/ca-certificates/107234.pem (1338 bytes)
	I1123 09:08:19.222362  415250 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-103686/.minikube/files/etc/ssl/certs/1072342.pem --> /usr/share/ca-certificates/1072342.pem (1708 bytes)
	I1123 09:08:19.240694  415250 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-103686/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1123 09:08:19.260448  415250 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1123 09:08:19.277312  415250 ssh_runner.go:195] Run: openssl version
	I1123 09:08:19.284592  415250 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/107234.pem && ln -fs /usr/share/ca-certificates/107234.pem /etc/ssl/certs/107234.pem"
	I1123 09:08:19.296432  415250 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/107234.pem
	I1123 09:08:19.301231  415250 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 23 08:25 /usr/share/ca-certificates/107234.pem
	I1123 09:08:19.301292  415250 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/107234.pem
	I1123 09:08:19.353787  415250 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/107234.pem /etc/ssl/certs/51391683.0"
	I1123 09:08:19.366093  415250 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1072342.pem && ln -fs /usr/share/ca-certificates/1072342.pem /etc/ssl/certs/1072342.pem"
	I1123 09:08:19.376400  415250 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1072342.pem
	I1123 09:08:19.380668  415250 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 23 08:25 /usr/share/ca-certificates/1072342.pem
	I1123 09:08:19.380727  415250 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1072342.pem
	I1123 09:08:19.421725  415250 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1072342.pem /etc/ssl/certs/3ec20f2e.0"
	I1123 09:08:19.430770  415250 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1123 09:08:19.439845  415250 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1123 09:08:19.444203  415250 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 23 08:20 /usr/share/ca-certificates/minikubeCA.pem
	I1123 09:08:19.444257  415250 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1123 09:08:19.488539  415250 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
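	==> Example: OpenSSL subject-hash links (sketch) <==
	The 51391683.0, 3ec20f2e.0, and b5213941.0 link names above are OpenSSL subject hashes: the library resolves trusted CAs in /etc/ssl/certs via "<hash>.0" filenames. A sketch of deriving one by hand, using the minikubeCA path from this log:
	  # sketch: compute the subject hash and create the lookup symlink
	  h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	  sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${h}.0"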
	I1123 09:08:19.498431  415250 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1123 09:08:19.502396  415250 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1123 09:08:19.538351  415250 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1123 09:08:19.601313  415250 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1123 09:08:19.659920  415250 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1123 09:08:19.717953  415250 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1123 09:08:19.770953  415250 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
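	==> Example: certificate expiry check (sketch) <==
	Each openssl run above passes -checkend 86400, which makes openssl exit non-zero if the certificate expires within 86400 seconds (24 hours); minikube uses that exit status to decide whether a certificate needs regenerating. The same check by hand, with a path from this log:
	  # sketch: non-zero exit means the cert expires within 24h
	  openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400 \
	    || echo "certificate expires within 24h"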
	I1123 09:08:19.832214  415250 kubeadm.go:401] StartCluster: {Name:embed-certs-529341 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-529341 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 09:08:19.832322  415250 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1123 09:08:19.832375  415250 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1123 09:08:19.875848  415250 cri.go:89] found id: "73227818d4fc9086a936e1b1251ac49dc9f565e9664d34c892e0e5e5c62a8920"
	I1123 09:08:19.875872  415250 cri.go:89] found id: "e146e17fa358a72d868c4916214f772a64934dfcef476610c2ec35b50a15e5a8"
	I1123 09:08:19.875879  415250 cri.go:89] found id: "9203249d1159b35eb2d2457002eb5a7611462190dc85089a0e28c7fd11b1257a"
	I1123 09:08:19.875884  415250 cri.go:89] found id: "51c0b9d62ee3b397d97f51cf65c1c8166419f7ce47ad5cd1f86257c9ff8d2429"
	I1123 09:08:19.875889  415250 cri.go:89] found id: ""
	I1123 09:08:19.875935  415250 ssh_runner.go:195] Run: sudo runc list -f json
	W1123 09:08:19.891594  415250 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T09:08:19Z" level=error msg="open /run/runc: no such file or directory"
	I1123 09:08:19.891686  415250 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1123 09:08:19.903047  415250 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1123 09:08:19.903066  415250 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1123 09:08:19.903187  415250 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1123 09:08:19.912235  415250 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1123 09:08:19.913082  415250 kubeconfig.go:47] verify endpoint returned: get endpoint: "embed-certs-529341" does not appear in /home/jenkins/minikube-integration/21969-103686/kubeconfig
	I1123 09:08:19.913651  415250 kubeconfig.go:62] /home/jenkins/minikube-integration/21969-103686/kubeconfig needs updating (will repair): [kubeconfig missing "embed-certs-529341" cluster setting kubeconfig missing "embed-certs-529341" context setting]
	I1123 09:08:19.914419  415250 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21969-103686/kubeconfig: {Name:mk8cdc20da988b29689f768b3cb01ff7e637077d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 09:08:19.916341  415250 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1123 09:08:19.926302  415250 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.103.2
	I1123 09:08:19.926424  415250 kubeadm.go:602] duration metric: took 23.347502ms to restartPrimaryControlPlane
	I1123 09:08:19.926446  415250 kubeadm.go:403] duration metric: took 94.240757ms to StartCluster
	I1123 09:08:19.926465  415250 settings.go:142] acquiring lock: {Name:mk7e59eae8b3289f60fef384e6a5716369959bb2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 09:08:19.926545  415250 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21969-103686/kubeconfig
	I1123 09:08:19.928357  415250 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21969-103686/kubeconfig: {Name:mk8cdc20da988b29689f768b3cb01ff7e637077d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 09:08:19.928564  415250 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1123 09:08:19.928711  415250 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1123 09:08:19.928812  415250 config.go:182] Loaded profile config "embed-certs-529341": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 09:08:19.928824  415250 addons.go:70] Setting storage-provisioner=true in profile "embed-certs-529341"
	I1123 09:08:19.928846  415250 addons.go:239] Setting addon storage-provisioner=true in "embed-certs-529341"
	I1123 09:08:19.928847  415250 addons.go:70] Setting dashboard=true in profile "embed-certs-529341"
	I1123 09:08:19.928855  415250 addons.go:70] Setting default-storageclass=true in profile "embed-certs-529341"
	W1123 09:08:19.928865  415250 addons.go:248] addon storage-provisioner should already be in state true
	I1123 09:08:19.928867  415250 addons.go:239] Setting addon dashboard=true in "embed-certs-529341"
	I1123 09:08:19.928867  415250 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-529341"
	W1123 09:08:19.928876  415250 addons.go:248] addon dashboard should already be in state true
	I1123 09:08:19.928898  415250 host.go:66] Checking if "embed-certs-529341" exists ...
	I1123 09:08:19.928902  415250 host.go:66] Checking if "embed-certs-529341" exists ...
	I1123 09:08:19.929206  415250 cli_runner.go:164] Run: docker container inspect embed-certs-529341 --format={{.State.Status}}
	I1123 09:08:19.929382  415250 cli_runner.go:164] Run: docker container inspect embed-certs-529341 --format={{.State.Status}}
	I1123 09:08:19.929388  415250 cli_runner.go:164] Run: docker container inspect embed-certs-529341 --format={{.State.Status}}
	I1123 09:08:19.930948  415250 out.go:179] * Verifying Kubernetes components...
	I1123 09:08:19.932191  415250 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 09:08:19.960535  415250 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1123 09:08:19.960612  415250 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1123 09:08:19.961728  415250 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1123 09:08:19.961755  415250 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1123 09:08:19.961816  415250 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-529341
	I1123 09:08:19.963025  415250 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	W1123 09:08:17.979959  409946 pod_ready.go:104] pod "coredns-66bc5c9577-dhxwz" is not "Ready", error: <nil>
	W1123 09:08:19.990166  409946 pod_ready.go:104] pod "coredns-66bc5c9577-dhxwz" is not "Ready", error: <nil>
	I1123 09:08:19.963555  415250 addons.go:239] Setting addon default-storageclass=true in "embed-certs-529341"
	W1123 09:08:19.963715  415250 addons.go:248] addon default-storageclass should already be in state true
	I1123 09:08:19.963816  415250 host.go:66] Checking if "embed-certs-529341" exists ...
	I1123 09:08:19.964132  415250 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1123 09:08:19.964156  415250 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1123 09:08:19.964209  415250 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-529341
	I1123 09:08:19.965307  415250 cli_runner.go:164] Run: docker container inspect embed-certs-529341 --format={{.State.Status}}
	I1123 09:08:20.002407  415250 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/21969-103686/.minikube/machines/embed-certs-529341/id_rsa Username:docker}
	I1123 09:08:20.015436  415250 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1123 09:08:20.015464  415250 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1123 09:08:20.015613  415250 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-529341
	I1123 09:08:20.024798  415250 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/21969-103686/.minikube/machines/embed-certs-529341/id_rsa Username:docker}
	I1123 09:08:20.044207  415250 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/21969-103686/.minikube/machines/embed-certs-529341/id_rsa Username:docker}
	I1123 09:08:20.118373  415250 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 09:08:20.133864  415250 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1123 09:08:20.133891  415250 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1123 09:08:20.136388  415250 node_ready.go:35] waiting up to 6m0s for node "embed-certs-529341" to be "Ready" ...
	I1123 09:08:20.150959  415250 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1123 09:08:20.151003  415250 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1123 09:08:20.163626  415250 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1123 09:08:20.172674  415250 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1123 09:08:20.172701  415250 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1123 09:08:20.196956  415250 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1123 09:08:20.216308  415250 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1123 09:08:20.216336  415250 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1123 09:08:20.238068  415250 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1123 09:08:20.238107  415250 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1123 09:08:20.264028  415250 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1123 09:08:20.264058  415250 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1123 09:08:20.281198  415250 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1123 09:08:20.281238  415250 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1123 09:08:20.298982  415250 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1123 09:08:20.299007  415250 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1123 09:08:20.312657  415250 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1123 09:08:20.312682  415250 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1123 09:08:20.326655  415250 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
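	==> Example: applying an addon manifest directory (sketch) <==
	The apply above lists each dashboard manifest with its own -f flag. kubectl also accepts a directory for -f and applies every manifest inside it; a sketch (note this would additionally pick up the storage-provisioner and storageclass manifests staged in the same directory):
	  # sketch: apply everything staged under /etc/kubernetes/addons/
	  sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
	    /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/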
	I1123 09:08:22.035674  415250 node_ready.go:49] node "embed-certs-529341" is "Ready"
	I1123 09:08:22.035709  415250 node_ready.go:38] duration metric: took 1.899291125s for node "embed-certs-529341" to be "Ready" ...
	I1123 09:08:22.035724  415250 api_server.go:52] waiting for apiserver process to appear ...
	I1123 09:08:22.035796  415250 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1123 09:08:22.561570  415250 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.364540583s)
	I1123 09:08:22.561561  415250 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.397893343s)
	I1123 09:08:22.561673  415250 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.234976578s)
	I1123 09:08:22.561742  415250 api_server.go:72] duration metric: took 2.633148596s to wait for apiserver process to appear ...
	I1123 09:08:22.561800  415250 api_server.go:88] waiting for apiserver healthz status ...
	I1123 09:08:22.561822  415250 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1123 09:08:22.563349  415250 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-529341 addons enable metrics-server
	
	I1123 09:08:22.569021  415250 api_server.go:279] https://192.168.103.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1123 09:08:22.569046  415250 api_server.go:103] status: https://192.168.103.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
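	==> Example: probing apiserver healthz (sketch) <==
	The 500 above is the verbose /healthz breakdown: two poststarthooks (rbac/bootstrap-roles and scheduling/bootstrap-system-priority-classes) had not yet completed, which is expected seconds after an apiserver restart. Individual checks can also be queried directly; a sketch using the endpoint from this log (-k because the serving certificate chains to minikube's private CA):
	  # sketch: full verbose check list, then one named poststarthook check
	  curl -ks 'https://192.168.103.2:8443/healthz?verbose'
	  curl -ks https://192.168.103.2:8443/healthz/poststarthook/rbac/bootstrap-roles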
	I1123 09:08:22.574664  415250 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1123 09:08:22.575641  415250 addons.go:530] duration metric: took 2.646957813s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1123 09:08:19.436092  416838 out.go:252] * Restarting existing docker container for "default-k8s-diff-port-602386" ...
	I1123 09:08:19.436180  416838 cli_runner.go:164] Run: docker start default-k8s-diff-port-602386
	I1123 09:08:19.800399  416838 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-602386 --format={{.State.Status}}
	I1123 09:08:19.825858  416838 kic.go:430] container "default-k8s-diff-port-602386" state is running.
	I1123 09:08:19.826489  416838 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-602386
	I1123 09:08:19.855627  416838 profile.go:143] Saving config to /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/default-k8s-diff-port-602386/config.json ...
	I1123 09:08:19.855907  416838 machine.go:94] provisionDockerMachine start ...
	I1123 09:08:19.856005  416838 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-602386
	I1123 09:08:19.880656  416838 main.go:143] libmachine: Using SSH client type: native
	I1123 09:08:19.881071  416838 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33123 <nil> <nil>}
	I1123 09:08:19.881091  416838 main.go:143] libmachine: About to run SSH command:
	hostname
	I1123 09:08:19.881914  416838 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:46272->127.0.0.1:33123: read: connection reset by peer
	I1123 09:08:23.030417  416838 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-602386
	
	I1123 09:08:23.030453  416838 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-602386"
	I1123 09:08:23.030529  416838 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-602386
	I1123 09:08:23.053328  416838 main.go:143] libmachine: Using SSH client type: native
	I1123 09:08:23.053642  416838 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33123 <nil> <nil>}
	I1123 09:08:23.053665  416838 main.go:143] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-602386 && echo "default-k8s-diff-port-602386" | sudo tee /etc/hostname
	I1123 09:08:23.220308  416838 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-602386
	
	I1123 09:08:23.220403  416838 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-602386
	I1123 09:08:23.240779  416838 main.go:143] libmachine: Using SSH client type: native
	I1123 09:08:23.241034  416838 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33123 <nil> <nil>}
	I1123 09:08:23.241054  416838 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-602386' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-602386/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-602386' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1123 09:08:23.387642  416838 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1123 09:08:23.387682  416838 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21969-103686/.minikube CaCertPath:/home/jenkins/minikube-integration/21969-103686/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21969-103686/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21969-103686/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21969-103686/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21969-103686/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21969-103686/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21969-103686/.minikube}
	I1123 09:08:23.387708  416838 ubuntu.go:190] setting up certificates
	I1123 09:08:23.387726  416838 provision.go:84] configureAuth start
	I1123 09:08:23.387780  416838 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-602386
	I1123 09:08:23.406855  416838 provision.go:143] copyHostCerts
	I1123 09:08:23.406915  416838 exec_runner.go:144] found /home/jenkins/minikube-integration/21969-103686/.minikube/key.pem, removing ...
	I1123 09:08:23.406933  416838 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21969-103686/.minikube/key.pem
	I1123 09:08:23.407026  416838 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21969-103686/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21969-103686/.minikube/key.pem (1675 bytes)
	I1123 09:08:23.407138  416838 exec_runner.go:144] found /home/jenkins/minikube-integration/21969-103686/.minikube/ca.pem, removing ...
	I1123 09:08:23.407148  416838 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21969-103686/.minikube/ca.pem
	I1123 09:08:23.407176  416838 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21969-103686/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21969-103686/.minikube/ca.pem (1078 bytes)
	I1123 09:08:23.407232  416838 exec_runner.go:144] found /home/jenkins/minikube-integration/21969-103686/.minikube/cert.pem, removing ...
	I1123 09:08:23.407239  416838 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21969-103686/.minikube/cert.pem
	I1123 09:08:23.407261  416838 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21969-103686/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21969-103686/.minikube/cert.pem (1123 bytes)
	I1123 09:08:23.407314  416838 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21969-103686/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21969-103686/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21969-103686/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-602386 san=[127.0.0.1 192.168.94.2 default-k8s-diff-port-602386 localhost minikube]
	I1123 09:08:23.459022  416838 provision.go:177] copyRemoteCerts
	I1123 09:08:23.459084  416838 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1123 09:08:23.459126  416838 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-602386
	I1123 09:08:23.477434  416838 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/21969-103686/.minikube/machines/default-k8s-diff-port-602386/id_rsa Username:docker}
	I1123 09:08:23.581514  416838 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-103686/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1123 09:08:23.600153  416838 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-103686/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1123 09:08:23.618496  416838 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-103686/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1123 09:08:23.636046  416838 provision.go:87] duration metric: took 248.305271ms to configureAuth
	I1123 09:08:23.636088  416838 ubuntu.go:206] setting minikube options for container-runtime
	I1123 09:08:23.636283  416838 config.go:182] Loaded profile config "default-k8s-diff-port-602386": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 09:08:23.636385  416838 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-602386
	I1123 09:08:23.654572  416838 main.go:143] libmachine: Using SSH client type: native
	I1123 09:08:23.654811  416838 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33123 <nil> <nil>}
	I1123 09:08:23.654832  416838 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1123 09:08:23.984145  416838 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1123 09:08:23.984172  416838 machine.go:97] duration metric: took 4.12825365s to provisionDockerMachine
	I1123 09:08:23.984187  416838 start.go:293] postStartSetup for "default-k8s-diff-port-602386" (driver="docker")
	I1123 09:08:23.984200  416838 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1123 09:08:23.984274  416838 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1123 09:08:23.984329  416838 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-602386
	I1123 09:08:24.003375  416838 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/21969-103686/.minikube/machines/default-k8s-diff-port-602386/id_rsa Username:docker}
	I1123 09:08:24.114002  416838 ssh_runner.go:195] Run: cat /etc/os-release
	I1123 09:08:24.118180  416838 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1123 09:08:24.118211  416838 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1123 09:08:24.118224  416838 filesync.go:126] Scanning /home/jenkins/minikube-integration/21969-103686/.minikube/addons for local assets ...
	I1123 09:08:24.118326  416838 filesync.go:126] Scanning /home/jenkins/minikube-integration/21969-103686/.minikube/files for local assets ...
	I1123 09:08:24.118419  416838 filesync.go:149] local asset: /home/jenkins/minikube-integration/21969-103686/.minikube/files/etc/ssl/certs/1072342.pem -> 1072342.pem in /etc/ssl/certs
	I1123 09:08:24.118523  416838 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1123 09:08:24.128435  416838 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-103686/.minikube/files/etc/ssl/certs/1072342.pem --> /etc/ssl/certs/1072342.pem (1708 bytes)
	I1123 09:08:24.149446  416838 start.go:296] duration metric: took 165.240917ms for postStartSetup
	I1123 09:08:24.149541  416838 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1123 09:08:24.149581  416838 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-602386
	I1123 09:08:24.168072  416838 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/21969-103686/.minikube/machines/default-k8s-diff-port-602386/id_rsa Username:docker}
	I1123 09:08:24.270207  416838 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1123 09:08:24.275101  416838 fix.go:56] duration metric: took 4.862787724s for fixHost
	I1123 09:08:24.275128  416838 start.go:83] releasing machines lock for "default-k8s-diff-port-602386", held for 4.862841676s
	I1123 09:08:24.275205  416838 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-602386
	I1123 09:08:24.293372  416838 ssh_runner.go:195] Run: cat /version.json
	I1123 09:08:24.293431  416838 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-602386
	I1123 09:08:24.293446  416838 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1123 09:08:24.293515  416838 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-602386
	I1123 09:08:24.311729  416838 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/21969-103686/.minikube/machines/default-k8s-diff-port-602386/id_rsa Username:docker}
	I1123 09:08:24.313129  416838 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/21969-103686/.minikube/machines/default-k8s-diff-port-602386/id_rsa Username:docker}
	I1123 09:08:24.410146  416838 ssh_runner.go:195] Run: systemctl --version
	I1123 09:08:24.467761  416838 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1123 09:08:24.503826  416838 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1123 09:08:24.508511  416838 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1123 09:08:24.508579  416838 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1123 09:08:24.516751  416838 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1123 09:08:24.516774  416838 start.go:496] detecting cgroup driver to use...
	I1123 09:08:24.516810  416838 detect.go:190] detected "systemd" cgroup driver on host os
	I1123 09:08:24.516852  416838 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1123 09:08:24.531696  416838 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1123 09:08:24.544238  416838 docker.go:218] disabling cri-docker service (if available) ...
	I1123 09:08:24.544282  416838 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1123 09:08:24.558346  416838 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1123 09:08:24.571780  416838 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1123 09:08:24.656177  416838 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1123 09:08:24.740759  416838 docker.go:234] disabling docker service ...
	I1123 09:08:24.740833  416838 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1123 09:08:24.756433  416838 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1123 09:08:24.770492  416838 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1123 09:08:24.851140  416838 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1123 09:08:24.935374  416838 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1123 09:08:24.949092  416838 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1123 09:08:24.963744  416838 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1123 09:08:24.963813  416838 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 09:08:24.973107  416838 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1123 09:08:24.973177  416838 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 09:08:24.984634  416838 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 09:08:24.994166  416838 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 09:08:25.003374  416838 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1123 09:08:25.012468  416838 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 09:08:25.021656  416838 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 09:08:25.030254  416838 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 09:08:25.039295  416838 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1123 09:08:25.047048  416838 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1123 09:08:25.054577  416838 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 09:08:25.140476  416838 ssh_runner.go:195] Run: sudo systemctl restart crio
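Note: taken together, the sed edits above leave /etc/crio/crio.conf.d/02-crio.conf with roughly the settings below before crio is restarted. This is a sketch reconstructed from the commands, not a dump of the real file (section placement is assumed):

	# [crio.image]
	pause_image = "registry.k8s.io/pause:3.10.1"
	# [crio.runtime]
	cgroup_manager = "systemd"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]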
	I1123 09:08:25.287412  416838 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1123 09:08:25.287495  416838 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1123 09:08:25.291665  416838 start.go:564] Will wait 60s for crictl version
	I1123 09:08:25.291717  416838 ssh_runner.go:195] Run: which crictl
	I1123 09:08:25.295719  416838 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1123 09:08:25.328656  416838 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1123 09:08:25.328766  416838 ssh_runner.go:195] Run: crio --version
	I1123 09:08:25.363554  416838 ssh_runner.go:195] Run: crio --version
	I1123 09:08:25.395471  416838 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1123 09:08:25.396716  416838 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-602386 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1123 09:08:25.414755  416838 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1123 09:08:25.418979  416838 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
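Note: the one-liner above is minikube's idempotent /etc/hosts pin: filter out any stale host.minikube.internal entry, append the current mapping, and copy the temp file back over /etc/hosts in one step. The same idiom, unpacked with this run's values:

	{ grep -v $'\thost.minikube.internal$' /etc/hosts; \
	  echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$
	sudo cp /tmp/h.$$ /etc/hosts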
	I1123 09:08:25.429445  416838 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-602386 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-602386 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1123 09:08:25.429551  416838 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1123 09:08:25.429602  416838 ssh_runner.go:195] Run: sudo crictl images --output json
	I1123 09:08:25.459534  416838 crio.go:514] all images are preloaded for cri-o runtime.
	I1123 09:08:25.459557  416838 crio.go:433] Images already preloaded, skipping extraction
	I1123 09:08:25.459609  416838 ssh_runner.go:195] Run: sudo crictl images --output json
	I1123 09:08:25.488196  416838 crio.go:514] all images are preloaded for cri-o runtime.
	I1123 09:08:25.488219  416838 cache_images.go:86] Images are preloaded, skipping loading
	I1123 09:08:25.488229  416838 kubeadm.go:935] updating node { 192.168.94.2 8444 v1.34.1 crio true true} ...
	I1123 09:08:25.488358  416838 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-602386 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-602386 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
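Note: in the kubelet unit above, the empty ExecStart= line is the standard systemd idiom for a drop-in that replaces the base unit's command: the blank assignment clears any inherited ExecStart before the new one is set (a simple service with two ExecStart entries would fail to load). The merged result can be inspected on the node with:

	systemctl cat kubelet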
	I1123 09:08:25.488444  416838 ssh_runner.go:195] Run: crio config
	I1123 09:08:25.534326  416838 cni.go:84] Creating CNI manager for ""
	I1123 09:08:25.534344  416838 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1123 09:08:25.534361  416838 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1123 09:08:25.534383  416838 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8444 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-602386 NodeName:default-k8s-diff-port-602386 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1123 09:08:25.534496  416838 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-602386"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
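	
	Note: this rendered kubeadm config is written to /var/tmp/minikube/kubeadm.yaml.new a few lines below. On recent kubeadm releases (v1.26+, an assumption about the node's binaries) a file like this can be linted before use:
	
		# sketch: validate the generated config without applying it
		sudo kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new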
	
	I1123 09:08:25.534554  416838 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1123 09:08:25.542885  416838 binaries.go:51] Found k8s binaries, skipping transfer
	I1123 09:08:25.542944  416838 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1123 09:08:25.550876  416838 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1123 09:08:25.564513  416838 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1123 09:08:25.577180  416838 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2224 bytes)
	I1123 09:08:25.589712  416838 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1123 09:08:25.593583  416838 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1123 09:08:25.604196  416838 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 09:08:25.690276  416838 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 09:08:25.715528  416838 certs.go:69] Setting up /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/default-k8s-diff-port-602386 for IP: 192.168.94.2
	I1123 09:08:25.715549  416838 certs.go:195] generating shared ca certs ...
	I1123 09:08:25.715568  416838 certs.go:227] acquiring lock for ca certs: {Name:mkeed0bc088459219396fb35c578dfb0927f31a3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 09:08:25.715732  416838 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21969-103686/.minikube/ca.key
	I1123 09:08:25.715779  416838 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21969-103686/.minikube/proxy-client-ca.key
	I1123 09:08:25.715789  416838 certs.go:257] generating profile certs ...
	I1123 09:08:25.715870  416838 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/default-k8s-diff-port-602386/client.key
	I1123 09:08:25.715929  416838 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/default-k8s-diff-port-602386/apiserver.key.0582d586
	I1123 09:08:25.715998  416838 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/default-k8s-diff-port-602386/proxy-client.key
	I1123 09:08:25.716111  416838 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-103686/.minikube/certs/107234.pem (1338 bytes)
	W1123 09:08:25.716145  416838 certs.go:480] ignoring /home/jenkins/minikube-integration/21969-103686/.minikube/certs/107234_empty.pem, impossibly tiny 0 bytes
	I1123 09:08:25.716155  416838 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-103686/.minikube/certs/ca-key.pem (1675 bytes)
	I1123 09:08:25.716181  416838 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-103686/.minikube/certs/ca.pem (1078 bytes)
	I1123 09:08:25.716205  416838 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-103686/.minikube/certs/cert.pem (1123 bytes)
	I1123 09:08:25.716228  416838 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-103686/.minikube/certs/key.pem (1675 bytes)
	I1123 09:08:25.716267  416838 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-103686/.minikube/files/etc/ssl/certs/1072342.pem (1708 bytes)
	I1123 09:08:25.716771  416838 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-103686/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1123 09:08:25.736220  416838 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-103686/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1123 09:08:25.755725  416838 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-103686/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1123 09:08:25.776235  416838 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-103686/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1671 bytes)
	I1123 09:08:25.799178  416838 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/default-k8s-diff-port-602386/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1123 09:08:25.821636  416838 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/default-k8s-diff-port-602386/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1123 09:08:25.848034  416838 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/default-k8s-diff-port-602386/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1123 09:08:25.869417  416838 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/default-k8s-diff-port-602386/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1123 09:08:25.887199  416838 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-103686/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1123 09:08:25.904702  416838 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-103686/.minikube/certs/107234.pem --> /usr/share/ca-certificates/107234.pem (1338 bytes)
	I1123 09:08:25.923174  416838 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-103686/.minikube/files/etc/ssl/certs/1072342.pem --> /usr/share/ca-certificates/1072342.pem (1708 bytes)
	I1123 09:08:25.940397  416838 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1123 09:08:25.952419  416838 ssh_runner.go:195] Run: openssl version
	I1123 09:08:25.958180  416838 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1123 09:08:25.967552  416838 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1123 09:08:25.971450  416838 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 23 08:20 /usr/share/ca-certificates/minikubeCA.pem
	I1123 09:08:25.971510  416838 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1123 09:08:26.009313  416838 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1123 09:08:26.019552  416838 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/107234.pem && ln -fs /usr/share/ca-certificates/107234.pem /etc/ssl/certs/107234.pem"
	I1123 09:08:26.028412  416838 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/107234.pem
	I1123 09:08:26.032165  416838 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 23 08:25 /usr/share/ca-certificates/107234.pem
	I1123 09:08:26.032218  416838 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/107234.pem
	I1123 09:08:26.068981  416838 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/107234.pem /etc/ssl/certs/51391683.0"
	I1123 09:08:26.077474  416838 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1072342.pem && ln -fs /usr/share/ca-certificates/1072342.pem /etc/ssl/certs/1072342.pem"
	I1123 09:08:26.086084  416838 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1072342.pem
	I1123 09:08:26.090076  416838 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 23 08:25 /usr/share/ca-certificates/1072342.pem
	I1123 09:08:26.090130  416838 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1072342.pem
	I1123 09:08:26.126340  416838 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1072342.pem /etc/ssl/certs/3ec20f2e.0"
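Note: the hash-and-symlink steps above follow OpenSSL's CA lookup convention: certificates under /etc/ssl/certs are found by <subject-hash>.0 filenames, and openssl x509 -hash prints exactly that hash. Done by hand for one of this run's certs:

	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/1072342.pem)
	sudo ln -fs /usr/share/ca-certificates/1072342.pem "/etc/ssl/certs/${h}.0"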
	I1123 09:08:26.135135  416838 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1123 09:08:26.139603  416838 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1123 09:08:26.183677  416838 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1123 09:08:26.220937  416838 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1123 09:08:26.271785  416838 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1123 09:08:26.318696  416838 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1123 09:08:26.379771  416838 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1123 09:08:26.427436  416838 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-602386 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-602386 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 09:08:26.427570  416838 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1123 09:08:26.427647  416838 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1123 09:08:26.464877  416838 cri.go:89] found id: "59138b2d822688d55c6f5894e7864beb2d6fa20594a1b422e8d201e2f8e1c1e2"
	I1123 09:08:26.464901  416838 cri.go:89] found id: "1adb64fac9cd8ca83cde2ea33c1a1d01fd97bd090a659c910fd2247606de3613"
	I1123 09:08:26.464908  416838 cri.go:89] found id: "cb6038e0d1fc65f02647a28477fb55a987cc2404a8c90e7eb192a2e5f4e18b98"
	I1123 09:08:26.464912  416838 cri.go:89] found id: "88d09657521f5eeced3d58b537526c35a1a86d0c7389280ba5c54672110cbd64"
	I1123 09:08:26.464917  416838 cri.go:89] found id: ""
	I1123 09:08:26.465005  416838 ssh_runner.go:195] Run: sudo runc list -f json
	W1123 09:08:26.480693  416838 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T09:08:26Z" level=error msg="open /run/runc: no such file or directory"
	I1123 09:08:26.480757  416838 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1123 09:08:26.489588  416838 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1123 09:08:26.489607  416838 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1123 09:08:26.489657  416838 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1123 09:08:26.499238  416838 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1123 09:08:26.500705  416838 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-602386" does not appear in /home/jenkins/minikube-integration/21969-103686/kubeconfig
	I1123 09:08:26.501824  416838 kubeconfig.go:62] /home/jenkins/minikube-integration/21969-103686/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-602386" cluster setting kubeconfig missing "default-k8s-diff-port-602386" context setting]
	I1123 09:08:26.503256  416838 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21969-103686/kubeconfig: {Name:mk8cdc20da988b29689f768b3cb01ff7e637077d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 09:08:26.505808  416838 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1123 09:08:26.514307  416838 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.94.2
	I1123 09:08:26.514338  416838 kubeadm.go:602] duration metric: took 24.725225ms to restartPrimaryControlPlane
	I1123 09:08:26.514347  416838 kubeadm.go:403] duration metric: took 86.921144ms to StartCluster
	I1123 09:08:26.514364  416838 settings.go:142] acquiring lock: {Name:mk7e59eae8b3289f60fef384e6a5716369959bb2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 09:08:26.514429  416838 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21969-103686/kubeconfig
	I1123 09:08:26.516861  416838 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21969-103686/kubeconfig: {Name:mk8cdc20da988b29689f768b3cb01ff7e637077d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 09:08:26.517152  416838 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1123 09:08:26.517225  416838 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1123 09:08:26.517332  416838 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-602386"
	I1123 09:08:26.517354  416838 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-602386"
	W1123 09:08:26.517363  416838 addons.go:248] addon storage-provisioner should already be in state true
	I1123 09:08:26.517382  416838 addons.go:70] Setting dashboard=true in profile "default-k8s-diff-port-602386"
	I1123 09:08:26.517403  416838 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-602386"
	I1123 09:08:26.517423  416838 addons.go:239] Setting addon dashboard=true in "default-k8s-diff-port-602386"
	W1123 09:08:26.517434  416838 addons.go:248] addon dashboard should already be in state true
	I1123 09:08:26.517428  416838 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-602386"
	I1123 09:08:26.517468  416838 host.go:66] Checking if "default-k8s-diff-port-602386" exists ...
	I1123 09:08:26.517394  416838 host.go:66] Checking if "default-k8s-diff-port-602386" exists ...
	I1123 09:08:26.517637  416838 config.go:182] Loaded profile config "default-k8s-diff-port-602386": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 09:08:26.517780  416838 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-602386 --format={{.State.Status}}
	I1123 09:08:26.518002  416838 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-602386 --format={{.State.Status}}
	I1123 09:08:26.518186  416838 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-602386 --format={{.State.Status}}
	I1123 09:08:26.519356  416838 out.go:179] * Verifying Kubernetes components...
	I1123 09:08:26.520536  416838 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 09:08:26.547074  416838 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-602386"
	W1123 09:08:26.547153  416838 addons.go:248] addon default-storageclass should already be in state true
	I1123 09:08:26.547183  416838 host.go:66] Checking if "default-k8s-diff-port-602386" exists ...
	I1123 09:08:26.547839  416838 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-602386 --format={{.State.Status}}
	I1123 09:08:26.548893  416838 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1123 09:08:26.549859  416838 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1123 09:08:26.551058  416838 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1123 09:08:26.551080  416838 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1123 09:08:26.551136  416838 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-602386
	I1123 09:08:26.551290  416838 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	W1123 09:08:22.480154  409946 pod_ready.go:104] pod "coredns-66bc5c9577-dhxwz" is not "Ready", error: <nil>
	W1123 09:08:24.978718  409946 pod_ready.go:104] pod "coredns-66bc5c9577-dhxwz" is not "Ready", error: <nil>
	W1123 09:08:26.979852  409946 pod_ready.go:104] pod "coredns-66bc5c9577-dhxwz" is not "Ready", error: <nil>
	I1123 09:08:23.062550  415250 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1123 09:08:23.067308  415250 api_server.go:279] https://192.168.103.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1123 09:08:23.067334  415250 api_server.go:103] status: https://192.168.103.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1123 09:08:23.561959  415250 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1123 09:08:23.566196  415250 api_server.go:279] https://192.168.103.2:8443/healthz returned 200:
	ok
	I1123 09:08:23.567141  415250 api_server.go:141] control plane version: v1.34.1
	I1123 09:08:23.567167  415250 api_server.go:131] duration metric: took 1.005360807s to wait for apiserver health ...
	I1123 09:08:23.567176  415250 system_pods.go:43] waiting for kube-system pods to appear ...
	I1123 09:08:23.570684  415250 system_pods.go:59] 8 kube-system pods found
	I1123 09:08:23.570712  415250 system_pods.go:61] "coredns-66bc5c9577-k4bmj" [0676d3db-d11b-433f-9c17-6131468d109d] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 09:08:23.570720  415250 system_pods.go:61] "etcd-embed-certs-529341" [3a0211ec-d796-4eec-82d3-6599cb786897] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1123 09:08:23.570726  415250 system_pods.go:61] "kindnet-twlcq" [45682d16-1f1e-4733-8a6b-31cf7cdfa5bd] Running
	I1123 09:08:23.570733  415250 system_pods.go:61] "kube-apiserver-embed-certs-529341" [51301aaf-4d05-41b4-b9c6-8ba22416a628] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1123 09:08:23.570739  415250 system_pods.go:61] "kube-controller-manager-embed-certs-529341" [7538c458-808e-4018-b566-af01d924edee] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1123 09:08:23.570746  415250 system_pods.go:61] "kube-proxy-xfwhk" [86a6640d-80fe-45a3-b48b-d2577d222ccf] Running
	I1123 09:08:23.570751  415250 system_pods.go:61] "kube-scheduler-embed-certs-529341" [8d0a8add-2bc8-4811-a1ac-a6c8d6d8273e] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1123 09:08:23.570757  415250 system_pods.go:61] "storage-provisioner" [c60e7298-2b0f-49f5-afde-b97e4bc8287d] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 09:08:23.570764  415250 system_pods.go:74] duration metric: took 3.583909ms to wait for pod list to return data ...
	I1123 09:08:23.570772  415250 default_sa.go:34] waiting for default service account to be created ...
	I1123 09:08:23.573041  415250 default_sa.go:45] found service account: "default"
	I1123 09:08:23.573059  415250 default_sa.go:55] duration metric: took 2.281538ms for default service account to be created ...
	I1123 09:08:23.573067  415250 system_pods.go:116] waiting for k8s-apps to be running ...
	I1123 09:08:23.576026  415250 system_pods.go:86] 8 kube-system pods found
	I1123 09:08:23.576057  415250 system_pods.go:89] "coredns-66bc5c9577-k4bmj" [0676d3db-d11b-433f-9c17-6131468d109d] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 09:08:23.576071  415250 system_pods.go:89] "etcd-embed-certs-529341" [3a0211ec-d796-4eec-82d3-6599cb786897] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1123 09:08:23.576084  415250 system_pods.go:89] "kindnet-twlcq" [45682d16-1f1e-4733-8a6b-31cf7cdfa5bd] Running
	I1123 09:08:23.576094  415250 system_pods.go:89] "kube-apiserver-embed-certs-529341" [51301aaf-4d05-41b4-b9c6-8ba22416a628] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1123 09:08:23.576107  415250 system_pods.go:89] "kube-controller-manager-embed-certs-529341" [7538c458-808e-4018-b566-af01d924edee] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1123 09:08:23.576114  415250 system_pods.go:89] "kube-proxy-xfwhk" [86a6640d-80fe-45a3-b48b-d2577d222ccf] Running
	I1123 09:08:23.576123  415250 system_pods.go:89] "kube-scheduler-embed-certs-529341" [8d0a8add-2bc8-4811-a1ac-a6c8d6d8273e] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1123 09:08:23.576131  415250 system_pods.go:89] "storage-provisioner" [c60e7298-2b0f-49f5-afde-b97e4bc8287d] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 09:08:23.576141  415250 system_pods.go:126] duration metric: took 3.068556ms to wait for k8s-apps to be running ...
	I1123 09:08:23.576155  415250 system_svc.go:44] waiting for kubelet service to be running ....
	I1123 09:08:23.576207  415250 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 09:08:23.589656  415250 system_svc.go:56] duration metric: took 13.492988ms WaitForService to wait for kubelet
	I1123 09:08:23.589687  415250 kubeadm.go:587] duration metric: took 3.661095272s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1123 09:08:23.589706  415250 node_conditions.go:102] verifying NodePressure condition ...
	I1123 09:08:23.592613  415250 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1123 09:08:23.592642  415250 node_conditions.go:123] node cpu capacity is 8
	I1123 09:08:23.592661  415250 node_conditions.go:105] duration metric: took 2.94425ms to run NodePressure ...
	I1123 09:08:23.592676  415250 start.go:242] waiting for startup goroutines ...
	I1123 09:08:23.592689  415250 start.go:247] waiting for cluster config update ...
	I1123 09:08:23.592708  415250 start.go:256] writing updated cluster config ...
	I1123 09:08:23.593078  415250 ssh_runner.go:195] Run: rm -f paused
	I1123 09:08:23.596792  415250 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1123 09:08:23.600652  415250 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-k4bmj" in "kube-system" namespace to be "Ready" or be gone ...
	W1123 09:08:25.606773  415250 pod_ready.go:104] pod "coredns-66bc5c9577-k4bmj" is not "Ready", error: <nil>
	W1123 09:08:27.610923  415250 pod_ready.go:104] pod "coredns-66bc5c9577-k4bmj" is not "Ready", error: <nil>
	I1123 09:08:26.555651  416838 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1123 09:08:26.555684  416838 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1123 09:08:26.555750  416838 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-602386
	I1123 09:08:26.584503  416838 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1123 09:08:26.584535  416838 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1123 09:08:26.584824  416838 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-602386
	I1123 09:08:26.589930  416838 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/21969-103686/.minikube/machines/default-k8s-diff-port-602386/id_rsa Username:docker}
	I1123 09:08:26.590627  416838 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/21969-103686/.minikube/machines/default-k8s-diff-port-602386/id_rsa Username:docker}
	I1123 09:08:26.615366  416838 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/21969-103686/.minikube/machines/default-k8s-diff-port-602386/id_rsa Username:docker}
	I1123 09:08:26.684290  416838 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 09:08:26.701472  416838 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-602386" to be "Ready" ...
	I1123 09:08:26.716057  416838 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1123 09:08:26.718579  416838 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1123 09:08:26.718616  416838 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1123 09:08:26.737734  416838 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1123 09:08:26.737751  416838 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1123 09:08:26.750096  416838 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1123 09:08:26.758250  416838 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1123 09:08:26.758613  416838 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1123 09:08:26.782936  416838 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1123 09:08:26.782964  416838 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1123 09:08:26.811228  416838 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1123 09:08:26.811260  416838 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1123 09:08:26.841566  416838 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1123 09:08:26.841592  416838 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1123 09:08:26.857272  416838 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1123 09:08:26.857295  416838 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1123 09:08:26.871688  416838 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1123 09:08:26.871709  416838 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1123 09:08:26.886574  416838 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1123 09:08:26.886600  416838 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1123 09:08:26.904626  416838 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
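
The ssh_runner/sshutil lines above copy each addon manifest over the container's published SSH port and then invoke kubectl on the node. A hedged sketch of that flow with golang.org/x/crypto/ssh follows; the port, key path, and command are copied from the log, error handling is trimmed, and this is not minikube's actual ssh_runner implementation.

	package main

	import (
		"fmt"
		"os"

		"golang.org/x/crypto/ssh"
	)

	func main() {
		key, err := os.ReadFile("/home/jenkins/minikube-integration/21969-103686/.minikube/machines/default-k8s-diff-port-602386/id_rsa")
		if err != nil {
			panic(err)
		}
		signer, err := ssh.ParsePrivateKey(key)
		if err != nil {
			panic(err)
		}
		cfg := &ssh.ClientConfig{
			User:            "docker",
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for throwaway test nodes only
		}
		// Port 33123 is the Docker-published mapping of the container's 22/tcp (see cli_runner above).
		client, err := ssh.Dial("tcp", "127.0.0.1:33123", cfg)
		if err != nil {
			panic(err)
		}
		defer client.Close()
		session, err := client.NewSession()
		if err != nil {
			panic(err)
		}
		defer session.Close()
		out, err := session.CombinedOutput(
			"sudo KUBECONFIG=/var/lib/minikube/kubeconfig " +
				"/var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml")
		fmt.Println(string(out), err)
	}
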
	I1123 09:08:28.103222  416838 node_ready.go:49] node "default-k8s-diff-port-602386" is "Ready"
	I1123 09:08:28.103257  416838 node_ready.go:38] duration metric: took 1.401750447s for node "default-k8s-diff-port-602386" to be "Ready" ...
	I1123 09:08:28.103273  416838 api_server.go:52] waiting for apiserver process to appear ...
	I1123 09:08:28.103334  416838 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1123 09:08:28.908407  416838 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.158065804s)
	I1123 09:08:28.908505  416838 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.19241101s)
	I1123 09:08:28.908675  416838 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.004006685s)
	I1123 09:08:28.908751  416838 api_server.go:72] duration metric: took 2.391563771s to wait for apiserver process to appear ...
	I1123 09:08:28.908816  416838 api_server.go:88] waiting for apiserver healthz status ...
	I1123 09:08:28.908854  416838 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8444/healthz ...
	I1123 09:08:28.910400  416838 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-602386 addons enable metrics-server
	
	I1123 09:08:28.916503  416838 api_server.go:279] https://192.168.94.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1123 09:08:28.916668  416838 api_server.go:103] status: https://192.168.94.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1123 09:08:28.926483  416838 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
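
The 500 above is expected while post-start hooks such as rbac/bootstrap-roles are still running; api_server.go simply re-polls /healthz until it returns 200. Here is a minimal stand-alone poll in the same spirit. The endpoint is from the log, and InsecureSkipVerify stands in for loading the cluster CA, so treat this as a sketch, not the real health check.

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout: 5 * time.Second,
			// Sketch only: a real client should trust the cluster CA instead.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		for {
			resp, err := client.Get("https://192.168.94.2:8444/healthz")
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					fmt.Println("healthz ok")
					return
				}
				// The [-] lines in the body name the post-start hooks still pending.
				fmt.Printf("healthz %d:\n%s\n", resp.StatusCode, body)
			}
			time.Sleep(2 * time.Second)
		}
	}
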
	
	
	==> CRI-O <==
	Nov 23 09:08:00 old-k8s-version-054094 crio[569]: time="2025-11-23T09:08:00.59068935Z" level=info msg="Created container eea74c901a931c4c28afb3b36f920404d19d8624dd9a2280c87c7e0a4c6619e4: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-262tc/dashboard-metrics-scraper" id=565e9a53-6de1-4397-b8b7-9d0a50ed5e86 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 09:08:00 old-k8s-version-054094 crio[569]: time="2025-11-23T09:08:00.591576377Z" level=info msg="Starting container: eea74c901a931c4c28afb3b36f920404d19d8624dd9a2280c87c7e0a4c6619e4" id=29ccfd69-f4fd-404e-9554-40e5050bd1b5 name=/runtime.v1.RuntimeService/StartContainer
	Nov 23 09:08:00 old-k8s-version-054094 crio[569]: time="2025-11-23T09:08:00.593443161Z" level=info msg="Started container" PID=1752 containerID=eea74c901a931c4c28afb3b36f920404d19d8624dd9a2280c87c7e0a4c6619e4 description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-262tc/dashboard-metrics-scraper id=29ccfd69-f4fd-404e-9554-40e5050bd1b5 name=/runtime.v1.RuntimeService/StartContainer sandboxID=f4c2b6e0482b4f41e32dc3d7c1716091dfcbd2816e26c4196e396a00c3918b5e
	Nov 23 09:08:01 old-k8s-version-054094 crio[569]: time="2025-11-23T09:08:01.039564163Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=55feba0a-35d1-48e6-a4f6-ad35e70dcc70 name=/runtime.v1.ImageService/ImageStatus
	Nov 23 09:08:01 old-k8s-version-054094 crio[569]: time="2025-11-23T09:08:01.042549561Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=5f8085a0-2bdb-4e38-bade-cded56165ba8 name=/runtime.v1.ImageService/ImageStatus
	Nov 23 09:08:01 old-k8s-version-054094 crio[569]: time="2025-11-23T09:08:01.04577461Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-262tc/dashboard-metrics-scraper" id=dc6f11a9-2301-4e10-9bba-fdd01cebd989 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 09:08:01 old-k8s-version-054094 crio[569]: time="2025-11-23T09:08:01.045929164Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 09:08:01 old-k8s-version-054094 crio[569]: time="2025-11-23T09:08:01.055138745Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 09:08:01 old-k8s-version-054094 crio[569]: time="2025-11-23T09:08:01.055636294Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 09:08:01 old-k8s-version-054094 crio[569]: time="2025-11-23T09:08:01.087055255Z" level=info msg="Created container 934eaad0aab1d0aea88e8660735c9268cdac9f5f4eae38e8daceba28ba691811: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-262tc/dashboard-metrics-scraper" id=dc6f11a9-2301-4e10-9bba-fdd01cebd989 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 09:08:01 old-k8s-version-054094 crio[569]: time="2025-11-23T09:08:01.087677104Z" level=info msg="Starting container: 934eaad0aab1d0aea88e8660735c9268cdac9f5f4eae38e8daceba28ba691811" id=a86c883f-3d79-4a48-96d1-947daf12b50d name=/runtime.v1.RuntimeService/StartContainer
	Nov 23 09:08:01 old-k8s-version-054094 crio[569]: time="2025-11-23T09:08:01.089999987Z" level=info msg="Started container" PID=1763 containerID=934eaad0aab1d0aea88e8660735c9268cdac9f5f4eae38e8daceba28ba691811 description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-262tc/dashboard-metrics-scraper id=a86c883f-3d79-4a48-96d1-947daf12b50d name=/runtime.v1.RuntimeService/StartContainer sandboxID=f4c2b6e0482b4f41e32dc3d7c1716091dfcbd2816e26c4196e396a00c3918b5e
	Nov 23 09:08:02 old-k8s-version-054094 crio[569]: time="2025-11-23T09:08:02.044546049Z" level=info msg="Removing container: eea74c901a931c4c28afb3b36f920404d19d8624dd9a2280c87c7e0a4c6619e4" id=65901194-9fe2-4899-897d-02df1391a9eb name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 23 09:08:02 old-k8s-version-054094 crio[569]: time="2025-11-23T09:08:02.05573717Z" level=info msg="Removed container eea74c901a931c4c28afb3b36f920404d19d8624dd9a2280c87c7e0a4c6619e4: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-262tc/dashboard-metrics-scraper" id=65901194-9fe2-4899-897d-02df1391a9eb name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 23 09:08:17 old-k8s-version-054094 crio[569]: time="2025-11-23T09:08:17.961293844Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=de0956d0-87da-491e-888c-07053fffc53e name=/runtime.v1.ImageService/ImageStatus
	Nov 23 09:08:17 old-k8s-version-054094 crio[569]: time="2025-11-23T09:08:17.962301452Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=0a4b3181-f0ac-49e2-8a34-5cfd2418f1ea name=/runtime.v1.ImageService/ImageStatus
	Nov 23 09:08:17 old-k8s-version-054094 crio[569]: time="2025-11-23T09:08:17.963386968Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-262tc/dashboard-metrics-scraper" id=738da8b9-2403-4130-9749-0762359a377f name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 09:08:17 old-k8s-version-054094 crio[569]: time="2025-11-23T09:08:17.963530618Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 09:08:17 old-k8s-version-054094 crio[569]: time="2025-11-23T09:08:17.970033328Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 09:08:17 old-k8s-version-054094 crio[569]: time="2025-11-23T09:08:17.970529198Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 09:08:17 old-k8s-version-054094 crio[569]: time="2025-11-23T09:08:17.99903147Z" level=info msg="Created container f0510ef795a2e0b5c70d3d975ff8094ef772658377dd866efff16426b9ceed2c: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-262tc/dashboard-metrics-scraper" id=738da8b9-2403-4130-9749-0762359a377f name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 09:08:17 old-k8s-version-054094 crio[569]: time="2025-11-23T09:08:17.999530887Z" level=info msg="Starting container: f0510ef795a2e0b5c70d3d975ff8094ef772658377dd866efff16426b9ceed2c" id=0d234516-d4c6-451e-9041-a45e9bc8f09a name=/runtime.v1.RuntimeService/StartContainer
	Nov 23 09:08:18 old-k8s-version-054094 crio[569]: time="2025-11-23T09:08:18.001618074Z" level=info msg="Started container" PID=1797 containerID=f0510ef795a2e0b5c70d3d975ff8094ef772658377dd866efff16426b9ceed2c description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-262tc/dashboard-metrics-scraper id=0d234516-d4c6-451e-9041-a45e9bc8f09a name=/runtime.v1.RuntimeService/StartContainer sandboxID=f4c2b6e0482b4f41e32dc3d7c1716091dfcbd2816e26c4196e396a00c3918b5e
	Nov 23 09:08:18 old-k8s-version-054094 crio[569]: time="2025-11-23T09:08:18.087727548Z" level=info msg="Removing container: 934eaad0aab1d0aea88e8660735c9268cdac9f5f4eae38e8daceba28ba691811" id=a14f0573-f338-4877-928f-a8c5eadb5a61 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 23 09:08:18 old-k8s-version-054094 crio[569]: time="2025-11-23T09:08:18.097280137Z" level=info msg="Removed container 934eaad0aab1d0aea88e8660735c9268cdac9f5f4eae38e8daceba28ba691811: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-262tc/dashboard-metrics-scraper" id=a14f0573-f338-4877-928f-a8c5eadb5a61 name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                              NAMESPACE
	f0510ef795a2e       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           11 seconds ago      Exited              dashboard-metrics-scraper   2                   f4c2b6e0482b4       dashboard-metrics-scraper-5f989dc9cf-262tc       kubernetes-dashboard
	b7902f0397bf0       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   31 seconds ago      Running             kubernetes-dashboard        0                   566c1fbcb017a       kubernetes-dashboard-8694d4445c-smgkc            kubernetes-dashboard
	a90702afed2b0       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           47 seconds ago      Running             storage-provisioner         1                   ee93b9758f5ff       storage-provisioner                              kube-system
	629c8538dd18c       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                           48 seconds ago      Running             coredns                     0                   c542cd613e7cc       coredns-5dd5756b68-whp8m                         kube-system
	9c7464426ad54       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           48 seconds ago      Running             busybox                     1                   7918e2f9585e5       busybox                                          default
	f32d9a2f7dcfa       ea1030da44aa18666a7bf15fddd2a38c3143c3277159cb8bdd95f45c8ce62d7a                                           48 seconds ago      Running             kube-proxy                  0                   f21c1d3e84905       kube-proxy-9crnb                                 kube-system
	3a5035af2c25e       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           48 seconds ago      Exited              storage-provisioner         0                   ee93b9758f5ff       storage-provisioner                              kube-system
	5e74dbebbc2f0       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           48 seconds ago      Running             kindnet-cni                 0                   1557ce1639fb0       kindnet-fhw8w                                    kube-system
	67da9dae46c0f       bb5e0dde9054c02d6badee88547be7e7bb7b7b818d277c8a61b4b29484bbff95                                           51 seconds ago      Running             kube-apiserver              0                   14b2fc7685cf4       kube-apiserver-old-k8s-version-054094            kube-system
	c7dc1d98ec4da       4be79c38a4bab6e1252a35697500e8a0d9c5c7c771d9fcc1935c9a7f6cdf4c62                                           51 seconds ago      Running             kube-controller-manager     0                   7ebe1c59fde3e       kube-controller-manager-old-k8s-version-054094   kube-system
	67cdf9a216a06       f6f496300a2ae7a6727ccf3080d66d2fd22b6cfc271df5351c976c23a28bb157                                           51 seconds ago      Running             kube-scheduler              0                   2f54d0e1541f9       kube-scheduler-old-k8s-version-054094            kube-system
	f308bae766722       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                           51 seconds ago      Running             etcd                        0                   c225f0b83cf95       etcd-old-k8s-version-054094                      kube-system
	
	
	==> coredns [629c8538dd18c46925238739061bb0f44ca62dc0ae653a849f5a698e44652b68] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b7aacdf6a6aa730aafe4d018cac9b7b5ecfb346cba84a99f64521f87aef8b4958639c1cf97967716465791d05bd38f372615327b7cb1d93c850bae532744d54d
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:52083 - 45688 "HINFO IN 8643529580468224209.8226291599787353716. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.127089334s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: i/o timeout
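
The i/o timeout above means CoreDNS could not yet reach the kubernetes Service VIP, which typically just means kube-proxy had not programmed the service rules when the pod came up. A quick in-cluster connectivity probe in the same vein, using the address from the log:

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		conn, err := net.DialTimeout("tcp", "10.96.0.1:443", 3*time.Second)
		if err != nil {
			fmt.Println("kubernetes service VIP unreachable:", err)
			return
		}
		conn.Close()
		fmt.Println("kubernetes service VIP reachable")
	}
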
	
	
	==> describe nodes <==
	Name:               old-k8s-version-054094
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-054094
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=50c3a8a3c03e8a84b6c978a884d21c3de8c6d4f1
	                    minikube.k8s.io/name=old-k8s-version-054094
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_23T09_06_33_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 23 Nov 2025 09:06:28 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-054094
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 23 Nov 2025 09:08:21 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 23 Nov 2025 09:08:11 +0000   Sun, 23 Nov 2025 09:06:26 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 23 Nov 2025 09:08:11 +0000   Sun, 23 Nov 2025 09:06:26 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 23 Nov 2025 09:08:11 +0000   Sun, 23 Nov 2025 09:06:26 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 23 Nov 2025 09:08:11 +0000   Sun, 23 Nov 2025 09:06:58 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    old-k8s-version-054094
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 9629f1d5bc1ed524a56ce23c69214c09
	  System UUID:                e0f1f612-a814-499c-889a-0902ab6fee2d
	  Boot ID:                    49386d37-de97-442f-8364-9b77578bcf47
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         89s
	  kube-system                 coredns-5dd5756b68-whp8m                          100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     106s
	  kube-system                 etcd-old-k8s-version-054094                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         118s
	  kube-system                 kindnet-fhw8w                                     100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      106s
	  kube-system                 kube-apiserver-old-k8s-version-054094             250m (3%)     0 (0%)      0 (0%)           0 (0%)         118s
	  kube-system                 kube-controller-manager-old-k8s-version-054094    200m (2%)     0 (0%)      0 (0%)           0 (0%)         118s
	  kube-system                 kube-proxy-9crnb                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         106s
	  kube-system                 kube-scheduler-old-k8s-version-054094             100m (1%)     0 (0%)      0 (0%)           0 (0%)         118s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         105s
	  kubernetes-dashboard        dashboard-metrics-scraper-5f989dc9cf-262tc        0 (0%)        0 (0%)      0 (0%)           0 (0%)         37s
	  kubernetes-dashboard        kubernetes-dashboard-8694d4445c-smgkc             0 (0%)        0 (0%)      0 (0%)           0 (0%)         37s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 104s                 kube-proxy       
	  Normal  Starting                 48s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  2m4s (x8 over 2m4s)  kubelet          Node old-k8s-version-054094 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m4s (x8 over 2m4s)  kubelet          Node old-k8s-version-054094 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m4s (x8 over 2m4s)  kubelet          Node old-k8s-version-054094 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientPID     118s                 kubelet          Node old-k8s-version-054094 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  118s                 kubelet          Node old-k8s-version-054094 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    118s                 kubelet          Node old-k8s-version-054094 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 118s                 kubelet          Starting kubelet.
	  Normal  RegisteredNode           107s                 node-controller  Node old-k8s-version-054094 event: Registered Node old-k8s-version-054094 in Controller
	  Normal  NodeReady                92s                  kubelet          Node old-k8s-version-054094 status is now: NodeReady
	  Normal  Starting                 53s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  52s (x8 over 53s)    kubelet          Node old-k8s-version-054094 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    52s (x8 over 53s)    kubelet          Node old-k8s-version-054094 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     52s (x8 over 53s)    kubelet          Node old-k8s-version-054094 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           37s                  node-controller  Node old-k8s-version-054094 event: Registered Node old-k8s-version-054094 in Controller
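
The Conditions table in the describe output above is rendered from node.Status.Conditions. A hedged client-go sketch that prints the same rows; the node name is from the log, and the kubeconfig path assumes you run it on the node.

	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		node, err := cs.CoreV1().Nodes().Get(context.TODO(), "old-k8s-version-054094", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		// Mirrors the Type / Status / Reason columns of `kubectl describe node`.
		for _, c := range node.Status.Conditions {
			fmt.Printf("%-16s %-6s %s\n", c.Type, c.Status, c.Reason)
		}
	}
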
	
	
	==> dmesg <==
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff e2 f6 b3 df 65 66 08 06
	[ +15.220231] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff ce d6 cd 1c d5 af 08 06
	[  +0.016823] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff 42 21 9e 70 54 d2 08 06
	[  +0.853950] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 8a f3 da 67 50 34 08 06
	[  +0.000402] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff e2 f6 b3 df 65 66 08 06
	[Nov23 09:06] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 3a fe f0 bb b2 e5 08 06
	[  +0.000433] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000017] ll header: 00000000: ff ff ff ff ff ff 42 21 9e 70 54 d2 08 06
	[ +22.099976] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff fa bc 42 68 ca 38 08 06
	[  +0.042361] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 02 6f 93 2c ed 12 08 06
	[ +12.988668] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff c6 40 c7 0d 08 88 08 06
	[  +0.000458] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff c6 f2 c5 3b d5 0a 08 06
	[  +8.074904] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff ba d8 15 23 cb ea 08 06
	[  +0.000480] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff fa bc 42 68 ca 38 08 06
	
	
	==> etcd [f308bae766722fb5efa2c7d1616cb7025893f5d7f71c748c3370f5085550daeb] <==
	{"level":"info","ts":"2025-11-23T09:07:38.557276Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-11-23T09:07:38.557577Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 switched to configuration voters=(16896983918768216326)"}
	{"level":"info","ts":"2025-11-23T09:07:38.55839Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","added-peer-id":"ea7e25599daad906","added-peer-peer-urls":["https://192.168.76.2:2380"]}
	{"level":"info","ts":"2025-11-23T09:07:38.558526Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-23T09:07:38.558565Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-23T09:07:38.559994Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-11-23T09:07:38.560127Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-11-23T09:07:38.560166Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-11-23T09:07:38.560247Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"ea7e25599daad906","initial-advertise-peer-urls":["https://192.168.76.2:2380"],"listen-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.76.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-11-23T09:07:38.560276Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-11-23T09:07:39.847907Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 is starting a new election at term 2"}
	{"level":"info","ts":"2025-11-23T09:07:39.847961Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became pre-candidate at term 2"}
	{"level":"info","ts":"2025-11-23T09:07:39.848002Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2025-11-23T09:07:39.848017Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became candidate at term 3"}
	{"level":"info","ts":"2025-11-23T09:07:39.848026Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2025-11-23T09:07:39.848038Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became leader at term 3"}
	{"level":"info","ts":"2025-11-23T09:07:39.848068Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2025-11-23T09:07:39.850353Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:old-k8s-version-054094 ClientURLs:[https://192.168.76.2:2379]}","request-path":"/0/members/ea7e25599daad906/attributes","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2025-11-23T09:07:39.850357Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-23T09:07:39.850378Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-23T09:07:39.850636Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-11-23T09:07:39.850678Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-11-23T09:07:39.851471Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.76.2:2379"}
	{"level":"info","ts":"2025-11-23T09:07:39.851542Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-11-23T09:08:12.155448Z","caller":"traceutil/trace.go:171","msg":"trace[1768020708] transaction","detail":"{read_only:false; response_revision:646; number_of_response:1; }","duration":"126.759793ms","start":"2025-11-23T09:08:12.02865Z","end":"2025-11-23T09:08:12.15541Z","steps":["trace[1768020708] 'process raft request'  (duration: 56.026699ms)","trace[1768020708] 'compare'  (duration: 70.593972ms)"],"step_count":2}
	
	
	==> kernel <==
	 09:08:30 up  1:50,  0 user,  load average: 5.89, 4.45, 2.82
	Linux old-k8s-version-054094 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [5e74dbebbc2f09c92cc8f26c86f4a178da062712bef7fb2aa891abaf9d0ef753] <==
	I1123 09:07:41.609620       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1123 09:07:41.609901       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1123 09:07:41.610135       1 main.go:148] setting mtu 1500 for CNI 
	I1123 09:07:41.610156       1 main.go:178] kindnetd IP family: "ipv4"
	I1123 09:07:41.610185       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-23T09:07:41Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1123 09:07:41.812878       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1123 09:07:41.831991       1 controller.go:381] "Waiting for informer caches to sync"
	I1123 09:07:41.832125       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1123 09:07:41.832733       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1123 09:07:42.133146       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1123 09:07:42.133188       1 metrics.go:72] Registering metrics
	I1123 09:07:42.133272       1 controller.go:711] "Syncing nftables rules"
	I1123 09:07:51.813170       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1123 09:07:51.813228       1 main.go:301] handling current node
	I1123 09:08:01.813668       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1123 09:08:01.813737       1 main.go:301] handling current node
	I1123 09:08:11.820029       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1123 09:08:11.820063       1 main.go:301] handling current node
	I1123 09:08:21.815054       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1123 09:08:21.815112       1 main.go:301] handling current node
	
	
	==> kube-apiserver [67da9dae46c0f7bf57f9dc994797c8788e6a957999afabdd876c802e5872cb68] <==
	I1123 09:07:40.849274       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1123 09:07:40.849518       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1123 09:07:40.849672       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1123 09:07:40.849696       1 aggregator.go:166] initial CRD sync complete...
	I1123 09:07:40.849703       1 autoregister_controller.go:141] Starting autoregister controller
	I1123 09:07:40.849709       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1123 09:07:40.849715       1 cache.go:39] Caches are synced for autoregister controller
	I1123 09:07:40.850158       1 shared_informer.go:318] Caches are synced for configmaps
	E1123 09:07:40.850322       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["exempt","global-default","leader-election","node-high","system","workload-high","workload-low","catch-all"] items=[{},{},{},{},{},{},{},{}]
	I1123 09:07:40.897469       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1123 09:07:41.748399       1 controller.go:624] quota admission added evaluator for: namespaces
	I1123 09:07:41.752625       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1123 09:07:41.780777       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1123 09:07:41.801128       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1123 09:07:41.810087       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1123 09:07:41.818582       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1123 09:07:41.855317       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.103.138.60"}
	I1123 09:07:41.868717       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.110.50.113"}
	E1123 09:07:50.849747       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["workload-high","workload-low","catch-all","exempt","global-default","leader-election","node-high","system"] items=[{},{},{},{},{},{},{},{}]
	I1123 09:07:53.317312       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1123 09:07:53.319804       1 controller.go:624] quota admission added evaluator for: endpoints
	I1123 09:07:53.415359       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	E1123 09:08:00.851052       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["global-default","leader-election","node-high","system","workload-high","workload-low","catch-all","exempt"] items=[{},{},{},{},{},{},{},{}]
	E1123 09:08:10.851489       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["leader-election","node-high","system","workload-high","workload-low","catch-all","exempt","global-default"] items=[{},{},{},{},{},{},{},{}]
	E1123 09:08:20.852197       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["leader-election","node-high","system","workload-high","workload-low","catch-all","exempt","global-default"] items=[{},{},{},{},{},{},{},{}]
	
	
	==> kube-controller-manager [c7dc1d98ec4da99d3a0764984d5923c598972517dad05844b7805b9388bb5cc9] <==
	I1123 09:07:53.453576       1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-8694d4445c-smgkc"
	I1123 09:07:53.454078       1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-5f989dc9cf-262tc"
	I1123 09:07:53.464090       1 shared_informer.go:318] Caches are synced for ReplicationController
	I1123 09:07:53.468083       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="40.231975ms"
	I1123 09:07:53.472760       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="44.632289ms"
	I1123 09:07:53.480761       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="11.874826ms"
	I1123 09:07:53.481643       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="68.246µs"
	I1123 09:07:53.485745       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="100.929µs"
	I1123 09:07:53.492085       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="19.20144ms"
	I1123 09:07:53.492294       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="85.205µs"
	I1123 09:07:53.497645       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="58.15µs"
	I1123 09:07:53.499125       1 shared_informer.go:318] Caches are synced for resource quota
	I1123 09:07:53.514855       1 shared_informer.go:318] Caches are synced for disruption
	I1123 09:07:53.857266       1 shared_informer.go:318] Caches are synced for garbage collector
	I1123 09:07:53.925851       1 shared_informer.go:318] Caches are synced for garbage collector
	I1123 09:07:53.925885       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1123 09:07:59.125193       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="41.675474ms"
	I1123 09:07:59.125354       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="112.399µs"
	I1123 09:08:01.054336       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="62.05µs"
	I1123 09:08:02.057730       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="128.84µs"
	I1123 09:08:03.059539       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="65.67µs"
	I1123 09:08:12.246908       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="89.850207ms"
	I1123 09:08:12.247054       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="97.987µs"
	I1123 09:08:18.096954       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="69.043µs"
	I1123 09:08:23.786825       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="71.818µs"
	
	
	==> kube-proxy [f32d9a2f7dcfa4d5ba236560662bb95e5ec188a673b28df770ec09f9d9c6aac9] <==
	I1123 09:07:41.428876       1 server_others.go:69] "Using iptables proxy"
	I1123 09:07:41.441378       1 node.go:141] Successfully retrieved node IP: 192.168.76.2
	I1123 09:07:41.463757       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1123 09:07:41.466437       1 server_others.go:152] "Using iptables Proxier"
	I1123 09:07:41.466475       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1123 09:07:41.466482       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1123 09:07:41.466511       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1123 09:07:41.466720       1 server.go:846] "Version info" version="v1.28.0"
	I1123 09:07:41.466733       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1123 09:07:41.468423       1 config.go:188] "Starting service config controller"
	I1123 09:07:41.468435       1 config.go:97] "Starting endpoint slice config controller"
	I1123 09:07:41.468843       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1123 09:07:41.468842       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1123 09:07:41.468622       1 config.go:315] "Starting node config controller"
	I1123 09:07:41.468982       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1123 09:07:41.570637       1 shared_informer.go:318] Caches are synced for node config
	I1123 09:07:41.570661       1 shared_informer.go:318] Caches are synced for service config
	I1123 09:07:41.570671       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [67cdf9a216a06c548df986856a47cb4952575cfc9b63188445c10205400e34be] <==
	I1123 09:07:38.979386       1 serving.go:348] Generated self-signed cert in-memory
	W1123 09:07:40.789289       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1123 09:07:40.789345       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1123 09:07:40.789358       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1123 09:07:40.789367       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1123 09:07:40.826776       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.0"
	I1123 09:07:40.828245       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1123 09:07:40.830780       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1123 09:07:40.830820       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1123 09:07:40.832079       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I1123 09:07:40.832340       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1123 09:07:40.931459       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Nov 23 09:07:53 old-k8s-version-054094 kubelet[737]: I1123 09:07:53.458991     737 topology_manager.go:215] "Topology Admit Handler" podUID="9aeb7744-7444-4754-a199-8a503b630d8b" podNamespace="kubernetes-dashboard" podName="kubernetes-dashboard-8694d4445c-smgkc"
	Nov 23 09:07:53 old-k8s-version-054094 kubelet[737]: I1123 09:07:53.475689     737 topology_manager.go:215] "Topology Admit Handler" podUID="5765b029-c0f4-4dd5-b495-7744f5cb301b" podNamespace="kubernetes-dashboard" podName="dashboard-metrics-scraper-5f989dc9cf-262tc"
	Nov 23 09:07:53 old-k8s-version-054094 kubelet[737]: I1123 09:07:53.583183     737 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/5765b029-c0f4-4dd5-b495-7744f5cb301b-tmp-volume\") pod \"dashboard-metrics-scraper-5f989dc9cf-262tc\" (UID: \"5765b029-c0f4-4dd5-b495-7744f5cb301b\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-262tc"
	Nov 23 09:07:53 old-k8s-version-054094 kubelet[737]: I1123 09:07:53.583245     737 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/9aeb7744-7444-4754-a199-8a503b630d8b-tmp-volume\") pod \"kubernetes-dashboard-8694d4445c-smgkc\" (UID: \"9aeb7744-7444-4754-a199-8a503b630d8b\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-smgkc"
	Nov 23 09:07:53 old-k8s-version-054094 kubelet[737]: I1123 09:07:53.583279     737 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xll5p\" (UniqueName: \"kubernetes.io/projected/9aeb7744-7444-4754-a199-8a503b630d8b-kube-api-access-xll5p\") pod \"kubernetes-dashboard-8694d4445c-smgkc\" (UID: \"9aeb7744-7444-4754-a199-8a503b630d8b\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-smgkc"
	Nov 23 09:07:53 old-k8s-version-054094 kubelet[737]: I1123 09:07:53.583361     737 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6s9r2\" (UniqueName: \"kubernetes.io/projected/5765b029-c0f4-4dd5-b495-7744f5cb301b-kube-api-access-6s9r2\") pod \"dashboard-metrics-scraper-5f989dc9cf-262tc\" (UID: \"5765b029-c0f4-4dd5-b495-7744f5cb301b\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-262tc"
	Nov 23 09:08:01 old-k8s-version-054094 kubelet[737]: I1123 09:08:01.038951     737 scope.go:117] "RemoveContainer" containerID="eea74c901a931c4c28afb3b36f920404d19d8624dd9a2280c87c7e0a4c6619e4"
	Nov 23 09:08:01 old-k8s-version-054094 kubelet[737]: I1123 09:08:01.054455     737 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-smgkc" podStartSLOduration=3.615675785 podCreationTimestamp="2025-11-23 09:07:53 +0000 UTC" firstStartedPulling="2025-11-23 09:07:53.79480745 +0000 UTC m=+15.923410308" lastFinishedPulling="2025-11-23 09:07:58.233521345 +0000 UTC m=+20.362124192" observedRunningTime="2025-11-23 09:07:59.082027409 +0000 UTC m=+21.210630275" watchObservedRunningTime="2025-11-23 09:08:01.054389669 +0000 UTC m=+23.182992537"
	Nov 23 09:08:02 old-k8s-version-054094 kubelet[737]: I1123 09:08:02.043224     737 scope.go:117] "RemoveContainer" containerID="eea74c901a931c4c28afb3b36f920404d19d8624dd9a2280c87c7e0a4c6619e4"
	Nov 23 09:08:02 old-k8s-version-054094 kubelet[737]: I1123 09:08:02.043416     737 scope.go:117] "RemoveContainer" containerID="934eaad0aab1d0aea88e8660735c9268cdac9f5f4eae38e8daceba28ba691811"
	Nov 23 09:08:02 old-k8s-version-054094 kubelet[737]: E1123 09:08:02.043786     737 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-262tc_kubernetes-dashboard(5765b029-c0f4-4dd5-b495-7744f5cb301b)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-262tc" podUID="5765b029-c0f4-4dd5-b495-7744f5cb301b"
	Nov 23 09:08:03 old-k8s-version-054094 kubelet[737]: I1123 09:08:03.047484     737 scope.go:117] "RemoveContainer" containerID="934eaad0aab1d0aea88e8660735c9268cdac9f5f4eae38e8daceba28ba691811"
	Nov 23 09:08:03 old-k8s-version-054094 kubelet[737]: E1123 09:08:03.047893     737 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-262tc_kubernetes-dashboard(5765b029-c0f4-4dd5-b495-7744f5cb301b)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-262tc" podUID="5765b029-c0f4-4dd5-b495-7744f5cb301b"
	Nov 23 09:08:04 old-k8s-version-054094 kubelet[737]: I1123 09:08:04.049917     737 scope.go:117] "RemoveContainer" containerID="934eaad0aab1d0aea88e8660735c9268cdac9f5f4eae38e8daceba28ba691811"
	Nov 23 09:08:04 old-k8s-version-054094 kubelet[737]: E1123 09:08:04.050327     737 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-262tc_kubernetes-dashboard(5765b029-c0f4-4dd5-b495-7744f5cb301b)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-262tc" podUID="5765b029-c0f4-4dd5-b495-7744f5cb301b"
	Nov 23 09:08:17 old-k8s-version-054094 kubelet[737]: I1123 09:08:17.960696     737 scope.go:117] "RemoveContainer" containerID="934eaad0aab1d0aea88e8660735c9268cdac9f5f4eae38e8daceba28ba691811"
	Nov 23 09:08:18 old-k8s-version-054094 kubelet[737]: I1123 09:08:18.086431     737 scope.go:117] "RemoveContainer" containerID="934eaad0aab1d0aea88e8660735c9268cdac9f5f4eae38e8daceba28ba691811"
	Nov 23 09:08:18 old-k8s-version-054094 kubelet[737]: I1123 09:08:18.086678     737 scope.go:117] "RemoveContainer" containerID="f0510ef795a2e0b5c70d3d975ff8094ef772658377dd866efff16426b9ceed2c"
	Nov 23 09:08:18 old-k8s-version-054094 kubelet[737]: E1123 09:08:18.087049     737 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-262tc_kubernetes-dashboard(5765b029-c0f4-4dd5-b495-7744f5cb301b)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-262tc" podUID="5765b029-c0f4-4dd5-b495-7744f5cb301b"
	Nov 23 09:08:23 old-k8s-version-054094 kubelet[737]: I1123 09:08:23.777427     737 scope.go:117] "RemoveContainer" containerID="f0510ef795a2e0b5c70d3d975ff8094ef772658377dd866efff16426b9ceed2c"
	Nov 23 09:08:23 old-k8s-version-054094 kubelet[737]: E1123 09:08:23.777801     737 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-262tc_kubernetes-dashboard(5765b029-c0f4-4dd5-b495-7744f5cb301b)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-262tc" podUID="5765b029-c0f4-4dd5-b495-7744f5cb301b"
	Nov 23 09:08:27 old-k8s-version-054094 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 23 09:08:27 old-k8s-version-054094 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 23 09:08:27 old-k8s-version-054094 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Nov 23 09:08:27 old-k8s-version-054094 systemd[1]: kubelet.service: Consumed 1.466s CPU time.
	
	
	==> kubernetes-dashboard [b7902f0397bf02fb653af022bdad06aea40eb13c6da9af1435a515c5ad12d0e1] <==
	2025/11/23 09:07:58 Using namespace: kubernetes-dashboard
	2025/11/23 09:07:58 Using in-cluster config to connect to apiserver
	2025/11/23 09:07:58 Using secret token for csrf signing
	2025/11/23 09:07:58 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/23 09:07:58 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/23 09:07:58 Successful initial request to the apiserver, version: v1.28.0
	2025/11/23 09:07:58 Generating JWE encryption key
	2025/11/23 09:07:58 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/23 09:07:58 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/23 09:07:58 Initializing JWE encryption key from synchronized object
	2025/11/23 09:07:58 Creating in-cluster Sidecar client
	2025/11/23 09:07:58 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/23 09:07:58 Serving insecurely on HTTP port: 9090
	2025/11/23 09:08:28 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/23 09:07:58 Starting overwatch
	
	
	==> storage-provisioner [3a5035af2c25e9076b679fe308a44f43a32681ba1653ba021cc6294822caf7f9] <==
	I1123 09:07:41.381987       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1123 09:07:41.388611       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	
	
	==> storage-provisioner [a90702afed2b040fbd77498ec11afee60f27d3a30d65069dea6e6961e8118621] <==
	I1123 09:07:42.036854       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1123 09:07:42.044276       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1123 09:07:42.044310       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1123 09:07:59.444577       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1123 09:07:59.444712       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"c3dec40d-0ff5-42c0-b2b8-e87a7b713465", APIVersion:"v1", ResourceVersion:"616", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-054094_3fb7a2a9-4954-480e-a2ce-90d7562fdeac became leader
	I1123 09:07:59.444721       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-054094_3fb7a2a9-4954-480e-a2ce-90d7562fdeac!
	I1123 09:07:59.544865       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-054094_3fb7a2a9-4954-480e-a2ce-90d7562fdeac!
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-054094 -n old-k8s-version-054094
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-054094 -n old-k8s-version-054094: exit status 2 (390.587618ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
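Note on the "(may be ok)": minikube encodes component state in the status command's exit code while still printing the host state ("Running") on stdout, which is why the harness records a non-zero exit here instead of failing outright. Below is a minimal Go sketch of that tolerant check, written for illustration only; the binary path, profile name, and the "may be ok" reading are taken from the log above, not from minikube documentation.

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	// Same invocation the harness makes above; binary path and profile
	// name are copied from the log, not requirements of the sketch.
	cmd := exec.Command("out/minikube-linux-amd64", "status",
		"--format={{.APIServer}}", "-p", "old-k8s-version-054094")
	out, err := cmd.Output()
	fmt.Printf("%s", out) // stdout still carries "Running" even on a non-zero exit

	var ee *exec.ExitError
	if errors.As(err, &ee) {
		// A non-zero exit encodes component state rather than a hard error,
		// so mirror the harness and record it as possibly benign.
		fmt.Printf("status error: exit status %d (may be ok)\n", ee.ExitCode())
	} else if err != nil {
		fmt.Println("could not run minikube:", err) // e.g. binary not found
	}
}

Treating the exit code as state rather than failure is what lets the post-mortem continue to the kubectl pod check that follows.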
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-054094 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-054094
helpers_test.go:243: (dbg) docker inspect old-k8s-version-054094:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "6fbb3e1692df1d8bcc2570eb6c4edce5c5187b424f65561f910e65c2a64b97b3",
	        "Created": "2025-11-23T09:06:14.055238477Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 407032,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-23T09:07:31.536642342Z",
	            "FinishedAt": "2025-11-23T09:07:30.561167024Z"
	        },
	        "Image": "sha256:133ca4ac39008d0056ad45d8cb70521d6b70d6e1b8bbff4678fd4b354efbdf70",
	        "ResolvConfPath": "/var/lib/docker/containers/6fbb3e1692df1d8bcc2570eb6c4edce5c5187b424f65561f910e65c2a64b97b3/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/6fbb3e1692df1d8bcc2570eb6c4edce5c5187b424f65561f910e65c2a64b97b3/hostname",
	        "HostsPath": "/var/lib/docker/containers/6fbb3e1692df1d8bcc2570eb6c4edce5c5187b424f65561f910e65c2a64b97b3/hosts",
	        "LogPath": "/var/lib/docker/containers/6fbb3e1692df1d8bcc2570eb6c4edce5c5187b424f65561f910e65c2a64b97b3/6fbb3e1692df1d8bcc2570eb6c4edce5c5187b424f65561f910e65c2a64b97b3-json.log",
	        "Name": "/old-k8s-version-054094",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-054094:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "old-k8s-version-054094",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "6fbb3e1692df1d8bcc2570eb6c4edce5c5187b424f65561f910e65c2a64b97b3",
	                "LowerDir": "/var/lib/docker/overlay2/7896100ea5d6d69fd8679aef5e7b10670677a84f077ad468f383d9f86b9a4a33-init/diff:/var/lib/docker/overlay2/5d7426d7b7590330a796f9ce8929a3dfaf6f95af5e86f4e4ea7f2a7f53308616/diff",
	                "MergedDir": "/var/lib/docker/overlay2/7896100ea5d6d69fd8679aef5e7b10670677a84f077ad468f383d9f86b9a4a33/merged",
	                "UpperDir": "/var/lib/docker/overlay2/7896100ea5d6d69fd8679aef5e7b10670677a84f077ad468f383d9f86b9a4a33/diff",
	                "WorkDir": "/var/lib/docker/overlay2/7896100ea5d6d69fd8679aef5e7b10670677a84f077ad468f383d9f86b9a4a33/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-054094",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-054094/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-054094",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-054094",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-054094",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "b7c0085e5dce4e6df2695e73598bb0cc19910327212e4ad847442c04a69b893d",
	            "SandboxKey": "/var/run/docker/netns/b7c0085e5dce",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33108"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33109"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33112"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33110"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33111"
	                    }
	                ]
	            },
	            "Networks": {
	                "old-k8s-version-054094": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "76e5790841e8d84532c8d28d1be8e40ba53fa4abb8a22eef487cc6e2d204979d",
	                    "EndpointID": "4cd1dd5db15b30b14fb0b508620442f32a28adc77e8e5c268eabbc5a1f7ccd04",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "MacAddress": "56:3f:31:ff:e4:85",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-054094",
	                        "6fbb3e1692df"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
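Of that inspect document, the checks that follow mostly depend on two fields: the container state and the host port mapped to the node's SSH port 22. A hypothetical one-off helper (not part of the harness) that extracts just those fields with docker inspect's -f Go-template flag:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Go template passed straight to `docker inspect -f`; the port key
	// "22/tcp" and the profile name match the inspect output above.
	format := `{{.State.Status}} {{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
	out, err := exec.Command("docker", "inspect", "-f", format,
		"old-k8s-version-054094").Output()
	if err != nil {
		fmt.Println("docker inspect failed:", err)
		return
	}
	fmt.Printf("%s", out)
}

Run against the container above, this would be expected to print "running 33108", matching State.Status and the 22/tcp HostPort in the dump.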
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-054094 -n old-k8s-version-054094
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-054094 -n old-k8s-version-054094: exit status 2 (461.35789ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-054094 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-054094 logs -n 25: (1.551569264s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p bridge-741183 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ bridge-741183                │ jenkins │ v1.37.0 │ 23 Nov 25 09:07 UTC │ 23 Nov 25 09:07 UTC │
	│ ssh     │ -p bridge-741183 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ bridge-741183                │ jenkins │ v1.37.0 │ 23 Nov 25 09:07 UTC │ 23 Nov 25 09:07 UTC │
	│ ssh     │ -p bridge-741183 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ bridge-741183                │ jenkins │ v1.37.0 │ 23 Nov 25 09:07 UTC │ 23 Nov 25 09:07 UTC │
	│ ssh     │ -p bridge-741183 sudo crio config                                                                                                                                                                                                             │ bridge-741183                │ jenkins │ v1.37.0 │ 23 Nov 25 09:07 UTC │ 23 Nov 25 09:07 UTC │
	│ delete  │ -p bridge-741183                                                                                                                                                                                                                              │ bridge-741183                │ jenkins │ v1.37.0 │ 23 Nov 25 09:07 UTC │ 23 Nov 25 09:07 UTC │
	│ delete  │ -p disable-driver-mounts-740936                                                                                                                                                                                                               │ disable-driver-mounts-740936 │ jenkins │ v1.37.0 │ 23 Nov 25 09:07 UTC │ 23 Nov 25 09:07 UTC │
	│ start   │ -p default-k8s-diff-port-602386 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-602386 │ jenkins │ v1.37.0 │ 23 Nov 25 09:07 UTC │ 23 Nov 25 09:07 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-054094 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-054094       │ jenkins │ v1.37.0 │ 23 Nov 25 09:07 UTC │                     │
	│ stop    │ -p old-k8s-version-054094 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-054094       │ jenkins │ v1.37.0 │ 23 Nov 25 09:07 UTC │ 23 Nov 25 09:07 UTC │
	│ addons  │ enable metrics-server -p no-preload-619589 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-619589            │ jenkins │ v1.37.0 │ 23 Nov 25 09:07 UTC │                     │
	│ stop    │ -p no-preload-619589 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-619589            │ jenkins │ v1.37.0 │ 23 Nov 25 09:07 UTC │ 23 Nov 25 09:07 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-054094 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-054094       │ jenkins │ v1.37.0 │ 23 Nov 25 09:07 UTC │ 23 Nov 25 09:07 UTC │
	│ start   │ -p old-k8s-version-054094 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-054094       │ jenkins │ v1.37.0 │ 23 Nov 25 09:07 UTC │ 23 Nov 25 09:08 UTC │
	│ addons  │ enable dashboard -p no-preload-619589 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-619589            │ jenkins │ v1.37.0 │ 23 Nov 25 09:07 UTC │ 23 Nov 25 09:07 UTC │
	│ start   │ -p no-preload-619589 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-619589            │ jenkins │ v1.37.0 │ 23 Nov 25 09:07 UTC │                     │
	│ addons  │ enable metrics-server -p embed-certs-529341 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-529341           │ jenkins │ v1.37.0 │ 23 Nov 25 09:07 UTC │                     │
	│ stop    │ -p embed-certs-529341 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-529341           │ jenkins │ v1.37.0 │ 23 Nov 25 09:07 UTC │ 23 Nov 25 09:08 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-602386 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-602386 │ jenkins │ v1.37.0 │ 23 Nov 25 09:08 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-602386 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-602386 │ jenkins │ v1.37.0 │ 23 Nov 25 09:08 UTC │ 23 Nov 25 09:08 UTC │
	│ addons  │ enable dashboard -p embed-certs-529341 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-529341           │ jenkins │ v1.37.0 │ 23 Nov 25 09:08 UTC │ 23 Nov 25 09:08 UTC │
	│ start   │ -p embed-certs-529341 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-529341           │ jenkins │ v1.37.0 │ 23 Nov 25 09:08 UTC │                     │
	│ addons  │ enable dashboard -p default-k8s-diff-port-602386 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-602386 │ jenkins │ v1.37.0 │ 23 Nov 25 09:08 UTC │ 23 Nov 25 09:08 UTC │
	│ start   │ -p default-k8s-diff-port-602386 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-602386 │ jenkins │ v1.37.0 │ 23 Nov 25 09:08 UTC │                     │
	│ image   │ old-k8s-version-054094 image list --format=json                                                                                                                                                                                               │ old-k8s-version-054094       │ jenkins │ v1.37.0 │ 23 Nov 25 09:08 UTC │ 23 Nov 25 09:08 UTC │
	│ pause   │ -p old-k8s-version-054094 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-054094       │ jenkins │ v1.37.0 │ 23 Nov 25 09:08 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/23 09:08:19
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1123 09:08:19.205185  416838 out.go:360] Setting OutFile to fd 1 ...
	I1123 09:08:19.205478  416838 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 09:08:19.205490  416838 out.go:374] Setting ErrFile to fd 2...
	I1123 09:08:19.205494  416838 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 09:08:19.205722  416838 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21969-103686/.minikube/bin
	I1123 09:08:19.206165  416838 out.go:368] Setting JSON to false
	I1123 09:08:19.207257  416838 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":6639,"bootTime":1763882260,"procs":314,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1123 09:08:19.207311  416838 start.go:143] virtualization: kvm guest
	I1123 09:08:19.209393  416838 out.go:179] * [default-k8s-diff-port-602386] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1123 09:08:19.210651  416838 out.go:179]   - MINIKUBE_LOCATION=21969
	I1123 09:08:19.210665  416838 notify.go:221] Checking for updates...
	I1123 09:08:19.212882  416838 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1123 09:08:19.214456  416838 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21969-103686/kubeconfig
	I1123 09:08:19.215612  416838 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21969-103686/.minikube
	I1123 09:08:19.216796  416838 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1123 09:08:19.217884  416838 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1123 09:08:19.219361  416838 config.go:182] Loaded profile config "default-k8s-diff-port-602386": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 09:08:19.220146  416838 driver.go:422] Setting default libvirt URI to qemu:///system
	I1123 09:08:19.250242  416838 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1123 09:08:19.250361  416838 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 09:08:19.315372  416838 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:77 SystemTime:2025-11-23 09:08:19.303997527 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1123 09:08:19.315465  416838 docker.go:319] overlay module found
	I1123 09:08:19.317310  416838 out.go:179] * Using the docker driver based on existing profile
	I1123 09:08:19.318384  416838 start.go:309] selected driver: docker
	I1123 09:08:19.318405  416838 start.go:927] validating driver "docker" against &{Name:default-k8s-diff-port-602386 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-602386 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 09:08:19.318479  416838 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1123 09:08:19.318935  416838 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 09:08:19.382679  416838 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:77 SystemTime:2025-11-23 09:08:19.371806831 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1123 09:08:19.383144  416838 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1123 09:08:19.383235  416838 cni.go:84] Creating CNI manager for ""
	I1123 09:08:19.383308  416838 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1123 09:08:19.383382  416838 start.go:353] cluster config:
	{Name:default-k8s-diff-port-602386 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-602386 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 09:08:19.385418  416838 out.go:179] * Starting "default-k8s-diff-port-602386" primary control-plane node in "default-k8s-diff-port-602386" cluster
	I1123 09:08:19.386625  416838 cache.go:134] Beginning downloading kic base image for docker with crio
	I1123 09:08:19.387845  416838 out.go:179] * Pulling base image v0.0.48-1763789673-21948 ...
	I1123 09:08:19.388905  416838 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1123 09:08:19.388955  416838 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21969-103686/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1123 09:08:19.388981  416838 cache.go:65] Caching tarball of preloaded images
	I1123 09:08:19.389030  416838 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1123 09:08:19.389087  416838 preload.go:238] Found /home/jenkins/minikube-integration/21969-103686/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1123 09:08:19.389104  416838 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1123 09:08:19.389230  416838 profile.go:143] Saving config to /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/default-k8s-diff-port-602386/config.json ...
	I1123 09:08:19.412109  416838 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon, skipping pull
	I1123 09:08:19.412136  416838 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in daemon, skipping load
	I1123 09:08:19.412156  416838 cache.go:243] Successfully downloaded all kic artifacts
	I1123 09:08:19.412202  416838 start.go:360] acquireMachinesLock for default-k8s-diff-port-602386: {Name:mk936d882fdf1c8707634b4555fdb3d8130ce5fd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1123 09:08:19.412273  416838 start.go:364] duration metric: took 46.298µs to acquireMachinesLock for "default-k8s-diff-port-602386"
	I1123 09:08:19.412295  416838 start.go:96] Skipping create...Using existing machine configuration
	I1123 09:08:19.412304  416838 fix.go:54] fixHost starting: 
	I1123 09:08:19.412592  416838 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-602386 --format={{.State.Status}}
	I1123 09:08:19.431148  416838 fix.go:112] recreateIfNeeded on default-k8s-diff-port-602386: state=Stopped err=<nil>
	W1123 09:08:19.431179  416838 fix.go:138] unexpected machine state, will restart: <nil>
	I1123 09:08:18.690385  415250 cli_runner.go:164] Run: docker network inspect embed-certs-529341 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1123 09:08:18.708783  415250 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1123 09:08:18.713056  415250 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1123 09:08:18.723419  415250 kubeadm.go:884] updating cluster {Name:embed-certs-529341 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-529341 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1123 09:08:18.723556  415250 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1123 09:08:18.723620  415250 ssh_runner.go:195] Run: sudo crictl images --output json
	I1123 09:08:18.755473  415250 crio.go:514] all images are preloaded for cri-o runtime.
	I1123 09:08:18.755495  415250 crio.go:433] Images already preloaded, skipping extraction
	I1123 09:08:18.755541  415250 ssh_runner.go:195] Run: sudo crictl images --output json
	I1123 09:08:18.783438  415250 crio.go:514] all images are preloaded for cri-o runtime.
	I1123 09:08:18.783464  415250 cache_images.go:86] Images are preloaded, skipping loading
	I1123 09:08:18.783474  415250 kubeadm.go:935] updating node { 192.168.103.2 8443 v1.34.1 crio true true} ...
	I1123 09:08:18.783627  415250 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-529341 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:embed-certs-529341 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1123 09:08:18.783724  415250 ssh_runner.go:195] Run: crio config
	I1123 09:08:18.831499  415250 cni.go:84] Creating CNI manager for ""
	I1123 09:08:18.831521  415250 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1123 09:08:18.831544  415250 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1123 09:08:18.831580  415250 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-529341 NodeName:embed-certs-529341 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1123 09:08:18.831739  415250 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-529341"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1123 09:08:18.831814  415250 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1123 09:08:18.841479  415250 binaries.go:51] Found k8s binaries, skipping transfer
	I1123 09:08:18.841543  415250 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1123 09:08:18.853942  415250 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (369 bytes)
	I1123 09:08:18.869083  415250 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1123 09:08:18.898425  415250 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2217 bytes)
	I1123 09:08:18.911340  415250 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1123 09:08:18.915384  415250 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1123 09:08:18.925993  415250 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 09:08:19.016337  415250 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 09:08:19.039545  415250 certs.go:69] Setting up /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/embed-certs-529341 for IP: 192.168.103.2
	I1123 09:08:19.039568  415250 certs.go:195] generating shared ca certs ...
	I1123 09:08:19.039591  415250 certs.go:227] acquiring lock for ca certs: {Name:mkeed0bc088459219396fb35c578dfb0927f31a3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 09:08:19.039753  415250 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21969-103686/.minikube/ca.key
	I1123 09:08:19.039805  415250 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21969-103686/.minikube/proxy-client-ca.key
	I1123 09:08:19.039820  415250 certs.go:257] generating profile certs ...
	I1123 09:08:19.039928  415250 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/embed-certs-529341/client.key
	I1123 09:08:19.040028  415250 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/embed-certs-529341/apiserver.key.ad13d260
	I1123 09:08:19.040078  415250 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/embed-certs-529341/proxy-client.key
	I1123 09:08:19.040220  415250 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-103686/.minikube/certs/107234.pem (1338 bytes)
	W1123 09:08:19.040263  415250 certs.go:480] ignoring /home/jenkins/minikube-integration/21969-103686/.minikube/certs/107234_empty.pem, impossibly tiny 0 bytes
	I1123 09:08:19.040278  415250 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-103686/.minikube/certs/ca-key.pem (1675 bytes)
	I1123 09:08:19.040314  415250 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-103686/.minikube/certs/ca.pem (1078 bytes)
	I1123 09:08:19.040346  415250 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-103686/.minikube/certs/cert.pem (1123 bytes)
	I1123 09:08:19.040382  415250 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-103686/.minikube/certs/key.pem (1675 bytes)
	I1123 09:08:19.040438  415250 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-103686/.minikube/files/etc/ssl/certs/1072342.pem (1708 bytes)
	I1123 09:08:19.041169  415250 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-103686/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1123 09:08:19.062372  415250 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-103686/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1123 09:08:19.082033  415250 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-103686/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1123 09:08:19.103400  415250 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-103686/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1671 bytes)
	I1123 09:08:19.126656  415250 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/embed-certs-529341/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1123 09:08:19.150062  415250 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/embed-certs-529341/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1123 09:08:19.167767  415250 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/embed-certs-529341/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1123 09:08:19.186013  415250 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/embed-certs-529341/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1123 09:08:19.204328  415250 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-103686/.minikube/certs/107234.pem --> /usr/share/ca-certificates/107234.pem (1338 bytes)
	I1123 09:08:19.222362  415250 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-103686/.minikube/files/etc/ssl/certs/1072342.pem --> /usr/share/ca-certificates/1072342.pem (1708 bytes)
	I1123 09:08:19.240694  415250 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-103686/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1123 09:08:19.260448  415250 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1123 09:08:19.277312  415250 ssh_runner.go:195] Run: openssl version
	I1123 09:08:19.284592  415250 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/107234.pem && ln -fs /usr/share/ca-certificates/107234.pem /etc/ssl/certs/107234.pem"
	I1123 09:08:19.296432  415250 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/107234.pem
	I1123 09:08:19.301231  415250 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 23 08:25 /usr/share/ca-certificates/107234.pem
	I1123 09:08:19.301292  415250 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/107234.pem
	I1123 09:08:19.353787  415250 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/107234.pem /etc/ssl/certs/51391683.0"
	I1123 09:08:19.366093  415250 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1072342.pem && ln -fs /usr/share/ca-certificates/1072342.pem /etc/ssl/certs/1072342.pem"
	I1123 09:08:19.376400  415250 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1072342.pem
	I1123 09:08:19.380668  415250 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 23 08:25 /usr/share/ca-certificates/1072342.pem
	I1123 09:08:19.380727  415250 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1072342.pem
	I1123 09:08:19.421725  415250 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1072342.pem /etc/ssl/certs/3ec20f2e.0"
	I1123 09:08:19.430770  415250 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1123 09:08:19.439845  415250 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1123 09:08:19.444203  415250 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 23 08:20 /usr/share/ca-certificates/minikubeCA.pem
	I1123 09:08:19.444257  415250 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1123 09:08:19.488539  415250 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
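Note on the three ln -fs runs above: they implement OpenSSL's hashed-certificate-directory convention. "openssl x509 -hash -noout -in <pem>" prints the 8-hex-digit subject hash (51391683, 3ec20f2e, and b5213941 in this run), and the system trust store expects a "<hash>.0" symlink in /etc/ssl/certs pointing at the PEM. A minimal local sketch of the same step in Go (run directly rather than through ssh_runner; the path is illustrative):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func main() {
	pemPath := "/usr/share/ca-certificates/minikubeCA.pem" // illustrative path
	// Ask openssl for the subject hash, exactly as the log does.
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		panic(err)
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	// ln -fs semantics: drop any stale link, then point <hash>.0 at the PEM.
	_ = os.Remove(link)
	if err := os.Symlink(pemPath, link); err != nil {
		panic(err)
	}
	fmt.Println("linked", link, "->", pemPath)
}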
	I1123 09:08:19.498431  415250 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1123 09:08:19.502396  415250 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1123 09:08:19.538351  415250 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1123 09:08:19.601313  415250 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1123 09:08:19.659920  415250 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1123 09:08:19.717953  415250 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1123 09:08:19.770953  415250 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
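The six -checkend 86400 runs above ask OpenSSL whether each control-plane certificate will still be valid 24 hours from now; a non-zero exit would trigger regeneration. The equivalent check in Go, as a sketch over one illustrative path:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// openssl x509 -checkend 86400: fail if the cert is past NotAfter 86400s from now.
	if time.Now().Add(86400 * time.Second).After(cert.NotAfter) {
		fmt.Println("certificate expires within 24h; regenerate")
		os.Exit(1)
	}
	fmt.Println("certificate valid for at least another 24h")
}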
	I1123 09:08:19.832214  415250 kubeadm.go:401] StartCluster: {Name:embed-certs-529341 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-529341 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 09:08:19.832322  415250 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1123 09:08:19.832375  415250 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1123 09:08:19.875848  415250 cri.go:89] found id: "73227818d4fc9086a936e1b1251ac49dc9f565e9664d34c892e0e5e5c62a8920"
	I1123 09:08:19.875872  415250 cri.go:89] found id: "e146e17fa358a72d868c4916214f772a64934dfcef476610c2ec35b50a15e5a8"
	I1123 09:08:19.875879  415250 cri.go:89] found id: "9203249d1159b35eb2d2457002eb5a7611462190dc85089a0e28c7fd11b1257a"
	I1123 09:08:19.875884  415250 cri.go:89] found id: "51c0b9d62ee3b397d97f51cf65c1c8166419f7ce47ad5cd1f86257c9ff8d2429"
	I1123 09:08:19.875889  415250 cri.go:89] found id: ""
	I1123 09:08:19.875935  415250 ssh_runner.go:195] Run: sudo runc list -f json
	W1123 09:08:19.891594  415250 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T09:08:19Z" level=error msg="open /run/runc: no such file or directory"
	I1123 09:08:19.891686  415250 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1123 09:08:19.903047  415250 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1123 09:08:19.903066  415250 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1123 09:08:19.903187  415250 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1123 09:08:19.912235  415250 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1123 09:08:19.913082  415250 kubeconfig.go:47] verify endpoint returned: get endpoint: "embed-certs-529341" does not appear in /home/jenkins/minikube-integration/21969-103686/kubeconfig
	I1123 09:08:19.913651  415250 kubeconfig.go:62] /home/jenkins/minikube-integration/21969-103686/kubeconfig needs updating (will repair): [kubeconfig missing "embed-certs-529341" cluster setting kubeconfig missing "embed-certs-529341" context setting]
	I1123 09:08:19.914419  415250 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21969-103686/kubeconfig: {Name:mk8cdc20da988b29689f768b3cb01ff7e637077d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 09:08:19.916341  415250 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1123 09:08:19.926302  415250 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.103.2
	I1123 09:08:19.926424  415250 kubeadm.go:602] duration metric: took 23.347502ms to restartPrimaryControlPlane
	I1123 09:08:19.926446  415250 kubeadm.go:403] duration metric: took 94.240757ms to StartCluster
	I1123 09:08:19.926465  415250 settings.go:142] acquiring lock: {Name:mk7e59eae8b3289f60fef384e6a5716369959bb2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 09:08:19.926545  415250 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21969-103686/kubeconfig
	I1123 09:08:19.928357  415250 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21969-103686/kubeconfig: {Name:mk8cdc20da988b29689f768b3cb01ff7e637077d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 09:08:19.928564  415250 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1123 09:08:19.928711  415250 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1123 09:08:19.928812  415250 config.go:182] Loaded profile config "embed-certs-529341": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 09:08:19.928824  415250 addons.go:70] Setting storage-provisioner=true in profile "embed-certs-529341"
	I1123 09:08:19.928846  415250 addons.go:239] Setting addon storage-provisioner=true in "embed-certs-529341"
	I1123 09:08:19.928847  415250 addons.go:70] Setting dashboard=true in profile "embed-certs-529341"
	I1123 09:08:19.928855  415250 addons.go:70] Setting default-storageclass=true in profile "embed-certs-529341"
	W1123 09:08:19.928865  415250 addons.go:248] addon storage-provisioner should already be in state true
	I1123 09:08:19.928867  415250 addons.go:239] Setting addon dashboard=true in "embed-certs-529341"
	I1123 09:08:19.928867  415250 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-529341"
	W1123 09:08:19.928876  415250 addons.go:248] addon dashboard should already be in state true
	I1123 09:08:19.928898  415250 host.go:66] Checking if "embed-certs-529341" exists ...
	I1123 09:08:19.928902  415250 host.go:66] Checking if "embed-certs-529341" exists ...
	I1123 09:08:19.929206  415250 cli_runner.go:164] Run: docker container inspect embed-certs-529341 --format={{.State.Status}}
	I1123 09:08:19.929382  415250 cli_runner.go:164] Run: docker container inspect embed-certs-529341 --format={{.State.Status}}
	I1123 09:08:19.929388  415250 cli_runner.go:164] Run: docker container inspect embed-certs-529341 --format={{.State.Status}}
	I1123 09:08:19.930948  415250 out.go:179] * Verifying Kubernetes components...
	I1123 09:08:19.932191  415250 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 09:08:19.960535  415250 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1123 09:08:19.960612  415250 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1123 09:08:19.961728  415250 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1123 09:08:19.961755  415250 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1123 09:08:19.961816  415250 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-529341
	I1123 09:08:19.963025  415250 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	W1123 09:08:17.979959  409946 pod_ready.go:104] pod "coredns-66bc5c9577-dhxwz" is not "Ready", error: <nil>
	W1123 09:08:19.990166  409946 pod_ready.go:104] pod "coredns-66bc5c9577-dhxwz" is not "Ready", error: <nil>
	I1123 09:08:19.963555  415250 addons.go:239] Setting addon default-storageclass=true in "embed-certs-529341"
	W1123 09:08:19.963715  415250 addons.go:248] addon default-storageclass should already be in state true
	I1123 09:08:19.963816  415250 host.go:66] Checking if "embed-certs-529341" exists ...
	I1123 09:08:19.964132  415250 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1123 09:08:19.964156  415250 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1123 09:08:19.964209  415250 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-529341
	I1123 09:08:19.965307  415250 cli_runner.go:164] Run: docker container inspect embed-certs-529341 --format={{.State.Status}}
	I1123 09:08:20.002407  415250 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/21969-103686/.minikube/machines/embed-certs-529341/id_rsa Username:docker}
	I1123 09:08:20.015436  415250 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1123 09:08:20.015464  415250 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1123 09:08:20.015613  415250 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-529341
	I1123 09:08:20.024798  415250 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/21969-103686/.minikube/machines/embed-certs-529341/id_rsa Username:docker}
	I1123 09:08:20.044207  415250 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/21969-103686/.minikube/machines/embed-certs-529341/id_rsa Username:docker}
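Each "new ssh client" line above is sshutil.go dialing the container's forwarded SSH port with the profile's id_rsa key. A minimal sketch of that connection using golang.org/x/crypto/ssh, with the address, user, and key path taken from the log (the relaxed host-key callback is tolerable here only because the target is a local kic container):

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	key, err := os.ReadFile("/home/jenkins/minikube-integration/21969-103686/.minikube/machines/embed-certs-529341/id_rsa")
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}
	client, err := ssh.Dial("tcp", "127.0.0.1:33118", &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // local container only
	})
	if err != nil {
		panic(err)
	}
	defer client.Close()

	sess, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer sess.Close()
	out, err := sess.CombinedOutput("uname -a")
	if err != nil {
		panic(err)
	}
	fmt.Printf("%s", out)
}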
	I1123 09:08:20.118373  415250 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 09:08:20.133864  415250 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1123 09:08:20.133891  415250 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1123 09:08:20.136388  415250 node_ready.go:35] waiting up to 6m0s for node "embed-certs-529341" to be "Ready" ...
	I1123 09:08:20.150959  415250 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1123 09:08:20.151003  415250 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1123 09:08:20.163626  415250 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1123 09:08:20.172674  415250 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1123 09:08:20.172701  415250 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1123 09:08:20.196956  415250 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1123 09:08:20.216308  415250 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1123 09:08:20.216336  415250 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1123 09:08:20.238068  415250 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1123 09:08:20.238107  415250 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1123 09:08:20.264028  415250 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1123 09:08:20.264058  415250 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1123 09:08:20.281198  415250 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1123 09:08:20.281238  415250 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1123 09:08:20.298982  415250 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1123 09:08:20.299007  415250 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1123 09:08:20.312657  415250 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1123 09:08:20.312682  415250 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1123 09:08:20.326655  415250 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1123 09:08:22.035674  415250 node_ready.go:49] node "embed-certs-529341" is "Ready"
	I1123 09:08:22.035709  415250 node_ready.go:38] duration metric: took 1.899291125s for node "embed-certs-529341" to be "Ready" ...
	I1123 09:08:22.035724  415250 api_server.go:52] waiting for apiserver process to appear ...
	I1123 09:08:22.035796  415250 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1123 09:08:22.561570  415250 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.364540583s)
	I1123 09:08:22.561561  415250 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.397893343s)
	I1123 09:08:22.561673  415250 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.234976578s)
	I1123 09:08:22.561742  415250 api_server.go:72] duration metric: took 2.633148596s to wait for apiserver process to appear ...
	I1123 09:08:22.561800  415250 api_server.go:88] waiting for apiserver healthz status ...
	I1123 09:08:22.561822  415250 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1123 09:08:22.563349  415250 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-529341 addons enable metrics-server
	
	I1123 09:08:22.569021  415250 api_server.go:279] https://192.168.103.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1123 09:08:22.569046  415250 api_server.go:103] status: https://192.168.103.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1123 09:08:22.574664  415250 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1123 09:08:22.575641  415250 addons.go:530] duration metric: took 2.646957813s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
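The 500 from /healthz above is expected this early in a restart: the rbac and scheduling post-start hooks report "failed: reason withheld" until bootstrap finishes, so api_server.go keeps polling until the endpoint returns 200. A sketch of that wait loop (TLS verification is skipped here for brevity; minikube itself validates against the cluster CA):

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 2 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // sketch only
		},
	}
	deadline := time.Now().Add(6 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get("https://192.168.103.2:8443/healthz")
		if err == nil {
			status := resp.StatusCode
			resp.Body.Close()
			if status == http.StatusOK {
				fmt.Println("apiserver healthy")
				return
			}
			fmt.Println("healthz returned", status, "- retrying")
		}
		time.Sleep(time.Second)
	}
	fmt.Println("timed out waiting for apiserver")
}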
	I1123 09:08:19.436092  416838 out.go:252] * Restarting existing docker container for "default-k8s-diff-port-602386" ...
	I1123 09:08:19.436180  416838 cli_runner.go:164] Run: docker start default-k8s-diff-port-602386
	I1123 09:08:19.800399  416838 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-602386 --format={{.State.Status}}
	I1123 09:08:19.825858  416838 kic.go:430] container "default-k8s-diff-port-602386" state is running.
	I1123 09:08:19.826489  416838 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-602386
	I1123 09:08:19.855627  416838 profile.go:143] Saving config to /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/default-k8s-diff-port-602386/config.json ...
	I1123 09:08:19.855907  416838 machine.go:94] provisionDockerMachine start ...
	I1123 09:08:19.856005  416838 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-602386
	I1123 09:08:19.880656  416838 main.go:143] libmachine: Using SSH client type: native
	I1123 09:08:19.881071  416838 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33123 <nil> <nil>}
	I1123 09:08:19.881091  416838 main.go:143] libmachine: About to run SSH command:
	hostname
	I1123 09:08:19.881914  416838 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:46272->127.0.0.1:33123: read: connection reset by peer
	I1123 09:08:23.030417  416838 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-602386
	
	I1123 09:08:23.030453  416838 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-602386"
	I1123 09:08:23.030529  416838 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-602386
	I1123 09:08:23.053328  416838 main.go:143] libmachine: Using SSH client type: native
	I1123 09:08:23.053642  416838 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33123 <nil> <nil>}
	I1123 09:08:23.053665  416838 main.go:143] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-602386 && echo "default-k8s-diff-port-602386" | sudo tee /etc/hostname
	I1123 09:08:23.220308  416838 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-602386
	
	I1123 09:08:23.220403  416838 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-602386
	I1123 09:08:23.240779  416838 main.go:143] libmachine: Using SSH client type: native
	I1123 09:08:23.241034  416838 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33123 <nil> <nil>}
	I1123 09:08:23.241054  416838 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-602386' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-602386/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-602386' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1123 09:08:23.387642  416838 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1123 09:08:23.387682  416838 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21969-103686/.minikube CaCertPath:/home/jenkins/minikube-integration/21969-103686/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21969-103686/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21969-103686/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21969-103686/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21969-103686/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21969-103686/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21969-103686/.minikube}
	I1123 09:08:23.387708  416838 ubuntu.go:190] setting up certificates
	I1123 09:08:23.387726  416838 provision.go:84] configureAuth start
	I1123 09:08:23.387780  416838 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-602386
	I1123 09:08:23.406855  416838 provision.go:143] copyHostCerts
	I1123 09:08:23.406915  416838 exec_runner.go:144] found /home/jenkins/minikube-integration/21969-103686/.minikube/key.pem, removing ...
	I1123 09:08:23.406933  416838 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21969-103686/.minikube/key.pem
	I1123 09:08:23.407026  416838 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21969-103686/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21969-103686/.minikube/key.pem (1675 bytes)
	I1123 09:08:23.407138  416838 exec_runner.go:144] found /home/jenkins/minikube-integration/21969-103686/.minikube/ca.pem, removing ...
	I1123 09:08:23.407148  416838 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21969-103686/.minikube/ca.pem
	I1123 09:08:23.407176  416838 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21969-103686/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21969-103686/.minikube/ca.pem (1078 bytes)
	I1123 09:08:23.407232  416838 exec_runner.go:144] found /home/jenkins/minikube-integration/21969-103686/.minikube/cert.pem, removing ...
	I1123 09:08:23.407239  416838 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21969-103686/.minikube/cert.pem
	I1123 09:08:23.407261  416838 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21969-103686/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21969-103686/.minikube/cert.pem (1123 bytes)
	I1123 09:08:23.407314  416838 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21969-103686/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21969-103686/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21969-103686/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-602386 san=[127.0.0.1 192.168.94.2 default-k8s-diff-port-602386 localhost minikube]
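configureAuth regenerates the Docker-machine style server certificate so that its SAN list covers every name the daemon can be reached by (the san=[...] list logged above). A condensed, self-contained sketch of issuing such a cert with crypto/x509; the throwaway in-process CA stands in for ca.pem/ca-key.pem, and error handling is elided for brevity:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Throwaway CA standing in for the profile's ca.pem/ca-key.pem.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server cert whose SANs mirror the log's san=[...] list.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.default-k8s-diff-port-602386"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"default-k8s-diff-port-602386", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.94.2")},
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
}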
	I1123 09:08:23.459022  416838 provision.go:177] copyRemoteCerts
	I1123 09:08:23.459084  416838 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1123 09:08:23.459126  416838 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-602386
	I1123 09:08:23.477434  416838 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/21969-103686/.minikube/machines/default-k8s-diff-port-602386/id_rsa Username:docker}
	I1123 09:08:23.581514  416838 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-103686/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1123 09:08:23.600153  416838 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-103686/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1123 09:08:23.618496  416838 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-103686/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1123 09:08:23.636046  416838 provision.go:87] duration metric: took 248.305271ms to configureAuth
	I1123 09:08:23.636088  416838 ubuntu.go:206] setting minikube options for container-runtime
	I1123 09:08:23.636283  416838 config.go:182] Loaded profile config "default-k8s-diff-port-602386": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 09:08:23.636385  416838 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-602386
	I1123 09:08:23.654572  416838 main.go:143] libmachine: Using SSH client type: native
	I1123 09:08:23.654811  416838 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33123 <nil> <nil>}
	I1123 09:08:23.654832  416838 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1123 09:08:23.984145  416838 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1123 09:08:23.984172  416838 machine.go:97] duration metric: took 4.12825365s to provisionDockerMachine
	I1123 09:08:23.984187  416838 start.go:293] postStartSetup for "default-k8s-diff-port-602386" (driver="docker")
	I1123 09:08:23.984200  416838 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1123 09:08:23.984274  416838 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1123 09:08:23.984329  416838 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-602386
	I1123 09:08:24.003375  416838 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/21969-103686/.minikube/machines/default-k8s-diff-port-602386/id_rsa Username:docker}
	I1123 09:08:24.114002  416838 ssh_runner.go:195] Run: cat /etc/os-release
	I1123 09:08:24.118180  416838 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1123 09:08:24.118211  416838 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1123 09:08:24.118224  416838 filesync.go:126] Scanning /home/jenkins/minikube-integration/21969-103686/.minikube/addons for local assets ...
	I1123 09:08:24.118326  416838 filesync.go:126] Scanning /home/jenkins/minikube-integration/21969-103686/.minikube/files for local assets ...
	I1123 09:08:24.118419  416838 filesync.go:149] local asset: /home/jenkins/minikube-integration/21969-103686/.minikube/files/etc/ssl/certs/1072342.pem -> 1072342.pem in /etc/ssl/certs
	I1123 09:08:24.118523  416838 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1123 09:08:24.128435  416838 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-103686/.minikube/files/etc/ssl/certs/1072342.pem --> /etc/ssl/certs/1072342.pem (1708 bytes)
	I1123 09:08:24.149446  416838 start.go:296] duration metric: took 165.240917ms for postStartSetup
	I1123 09:08:24.149541  416838 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1123 09:08:24.149581  416838 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-602386
	I1123 09:08:24.168072  416838 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/21969-103686/.minikube/machines/default-k8s-diff-port-602386/id_rsa Username:docker}
	I1123 09:08:24.270207  416838 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1123 09:08:24.275101  416838 fix.go:56] duration metric: took 4.862787724s for fixHost
	I1123 09:08:24.275128  416838 start.go:83] releasing machines lock for "default-k8s-diff-port-602386", held for 4.862841676s
	I1123 09:08:24.275205  416838 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-602386
	I1123 09:08:24.293372  416838 ssh_runner.go:195] Run: cat /version.json
	I1123 09:08:24.293431  416838 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-602386
	I1123 09:08:24.293446  416838 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1123 09:08:24.293515  416838 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-602386
	I1123 09:08:24.311729  416838 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/21969-103686/.minikube/machines/default-k8s-diff-port-602386/id_rsa Username:docker}
	I1123 09:08:24.313129  416838 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/21969-103686/.minikube/machines/default-k8s-diff-port-602386/id_rsa Username:docker}
	I1123 09:08:24.410146  416838 ssh_runner.go:195] Run: systemctl --version
	I1123 09:08:24.467761  416838 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1123 09:08:24.503826  416838 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1123 09:08:24.508511  416838 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1123 09:08:24.508579  416838 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1123 09:08:24.516751  416838 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1123 09:08:24.516774  416838 start.go:496] detecting cgroup driver to use...
	I1123 09:08:24.516810  416838 detect.go:190] detected "systemd" cgroup driver on host os
	I1123 09:08:24.516852  416838 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1123 09:08:24.531696  416838 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1123 09:08:24.544238  416838 docker.go:218] disabling cri-docker service (if available) ...
	I1123 09:08:24.544282  416838 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1123 09:08:24.558346  416838 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1123 09:08:24.571780  416838 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1123 09:08:24.656177  416838 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1123 09:08:24.740759  416838 docker.go:234] disabling docker service ...
	I1123 09:08:24.740833  416838 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1123 09:08:24.756433  416838 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1123 09:08:24.770492  416838 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1123 09:08:24.851140  416838 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1123 09:08:24.935374  416838 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1123 09:08:24.949092  416838 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1123 09:08:24.963744  416838 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1123 09:08:24.963813  416838 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 09:08:24.973107  416838 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1123 09:08:24.973177  416838 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 09:08:24.984634  416838 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 09:08:24.994166  416838 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 09:08:25.003374  416838 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1123 09:08:25.012468  416838 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 09:08:25.021656  416838 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 09:08:25.030254  416838 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
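Taken together, the sed edits above leave the CRI-O drop-in /etc/crio/crio.conf.d/02-crio.conf with roughly the following settings (a reconstruction from the commands, not a capture from the node; the section headers are assumed from the stock kicbase file):

[crio.image]
pause_image = "registry.k8s.io/pause:3.10.1"

[crio.runtime]
cgroup_manager = "systemd"
conmon_cgroup = "pod"
default_sysctls = [
  "net.ipv4.ip_unprivileged_port_start=0",
]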
	I1123 09:08:25.039295  416838 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1123 09:08:25.047048  416838 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1123 09:08:25.054577  416838 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 09:08:25.140476  416838 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1123 09:08:25.287412  416838 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1123 09:08:25.287495  416838 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1123 09:08:25.291665  416838 start.go:564] Will wait 60s for crictl version
	I1123 09:08:25.291717  416838 ssh_runner.go:195] Run: which crictl
	I1123 09:08:25.295719  416838 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1123 09:08:25.328656  416838 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1123 09:08:25.328766  416838 ssh_runner.go:195] Run: crio --version
	I1123 09:08:25.363554  416838 ssh_runner.go:195] Run: crio --version
	I1123 09:08:25.395471  416838 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1123 09:08:25.396716  416838 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-602386 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1123 09:08:25.414755  416838 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1123 09:08:25.418979  416838 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1123 09:08:25.429445  416838 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-602386 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-602386 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1123 09:08:25.429551  416838 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1123 09:08:25.429602  416838 ssh_runner.go:195] Run: sudo crictl images --output json
	I1123 09:08:25.459534  416838 crio.go:514] all images are preloaded for cri-o runtime.
	I1123 09:08:25.459557  416838 crio.go:433] Images already preloaded, skipping extraction
	I1123 09:08:25.459609  416838 ssh_runner.go:195] Run: sudo crictl images --output json
	I1123 09:08:25.488196  416838 crio.go:514] all images are preloaded for cri-o runtime.
	I1123 09:08:25.488219  416838 cache_images.go:86] Images are preloaded, skipping loading
	I1123 09:08:25.488229  416838 kubeadm.go:935] updating node { 192.168.94.2 8444 v1.34.1 crio true true} ...
	I1123 09:08:25.488358  416838 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-602386 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-602386 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1123 09:08:25.488444  416838 ssh_runner.go:195] Run: crio config
	I1123 09:08:25.534326  416838 cni.go:84] Creating CNI manager for ""
	I1123 09:08:25.534344  416838 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1123 09:08:25.534361  416838 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1123 09:08:25.534383  416838 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8444 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-602386 NodeName:default-k8s-diff-port-602386 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1123 09:08:25.534496  416838 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-602386"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
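minikube renders the kubeadm.yaml above from Go templates in its bootstrapper package; a minimal text/template sketch of the same rendering idea, reduced to a few illustrative fields from this run:

package main

import (
	"os"
	"text/template"
)

// A cut-down stand-in for minikube's real kubeadm template.
const initCfg = `apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.NodeIP}}
  bindPort: {{.APIServerPort}}
nodeRegistration:
  criSocket: unix:///var/run/crio/crio.sock
  name: "{{.NodeName}}"
`

func main() {
	t := template.Must(template.New("kubeadm").Parse(initCfg))
	if err := t.Execute(os.Stdout, struct {
		NodeIP        string
		APIServerPort int
		NodeName      string
	}{"192.168.94.2", 8444, "default-k8s-diff-port-602386"}); err != nil {
		panic(err)
	}
}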
	I1123 09:08:25.534554  416838 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1123 09:08:25.542885  416838 binaries.go:51] Found k8s binaries, skipping transfer
	I1123 09:08:25.542944  416838 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1123 09:08:25.550876  416838 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1123 09:08:25.564513  416838 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1123 09:08:25.577180  416838 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2224 bytes)
	I1123 09:08:25.589712  416838 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1123 09:08:25.593583  416838 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1123 09:08:25.604196  416838 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 09:08:25.690276  416838 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 09:08:25.715528  416838 certs.go:69] Setting up /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/default-k8s-diff-port-602386 for IP: 192.168.94.2
	I1123 09:08:25.715549  416838 certs.go:195] generating shared ca certs ...
	I1123 09:08:25.715568  416838 certs.go:227] acquiring lock for ca certs: {Name:mkeed0bc088459219396fb35c578dfb0927f31a3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 09:08:25.715732  416838 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21969-103686/.minikube/ca.key
	I1123 09:08:25.715779  416838 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21969-103686/.minikube/proxy-client-ca.key
	I1123 09:08:25.715789  416838 certs.go:257] generating profile certs ...
	I1123 09:08:25.715870  416838 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/default-k8s-diff-port-602386/client.key
	I1123 09:08:25.715929  416838 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/default-k8s-diff-port-602386/apiserver.key.0582d586
	I1123 09:08:25.715998  416838 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/default-k8s-diff-port-602386/proxy-client.key
	I1123 09:08:25.716111  416838 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-103686/.minikube/certs/107234.pem (1338 bytes)
	W1123 09:08:25.716145  416838 certs.go:480] ignoring /home/jenkins/minikube-integration/21969-103686/.minikube/certs/107234_empty.pem, impossibly tiny 0 bytes
	I1123 09:08:25.716155  416838 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-103686/.minikube/certs/ca-key.pem (1675 bytes)
	I1123 09:08:25.716181  416838 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-103686/.minikube/certs/ca.pem (1078 bytes)
	I1123 09:08:25.716205  416838 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-103686/.minikube/certs/cert.pem (1123 bytes)
	I1123 09:08:25.716228  416838 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-103686/.minikube/certs/key.pem (1675 bytes)
	I1123 09:08:25.716267  416838 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-103686/.minikube/files/etc/ssl/certs/1072342.pem (1708 bytes)
	I1123 09:08:25.716771  416838 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-103686/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1123 09:08:25.736220  416838 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-103686/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1123 09:08:25.755725  416838 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-103686/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1123 09:08:25.776235  416838 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-103686/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1671 bytes)
	I1123 09:08:25.799178  416838 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/default-k8s-diff-port-602386/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1123 09:08:25.821636  416838 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/default-k8s-diff-port-602386/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1123 09:08:25.848034  416838 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/default-k8s-diff-port-602386/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1123 09:08:25.869417  416838 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/default-k8s-diff-port-602386/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1123 09:08:25.887199  416838 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-103686/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1123 09:08:25.904702  416838 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-103686/.minikube/certs/107234.pem --> /usr/share/ca-certificates/107234.pem (1338 bytes)
	I1123 09:08:25.923174  416838 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-103686/.minikube/files/etc/ssl/certs/1072342.pem --> /usr/share/ca-certificates/1072342.pem (1708 bytes)
	I1123 09:08:25.940397  416838 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
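
The block above stages the shared CAs, the profile's serving and client certs, and an in-memory kubeconfig onto the node at fixed paths under /var/lib/minikube and /usr/share/ca-certificates. A minimal Go sketch of the same table-driven copy; scpToNode is a hypothetical stand-in for minikube's ssh_runner transfer and only reports what would be copied:

    package main

    import (
    	"fmt"
    	"os"
    	"path/filepath"
    )

    // scpToNode is a hypothetical stand-in for the ssh_runner scp step;
    // here it only validates the local file and reports the copy.
    func scpToNode(src, dst string) error {
    	if _, err := os.Stat(src); err != nil {
    		return err // refuse to "copy" a missing local cert
    	}
    	fmt.Printf("scp %s --> %s\n", src, dst)
    	return nil
    }

    func main() {
    	miniHome := os.Getenv("MINIKUBE_HOME") // assumption: certs live under this dir
    	pairs := [][2]string{
    		{"ca.crt", "/var/lib/minikube/certs/ca.crt"},
    		{"ca.key", "/var/lib/minikube/certs/ca.key"},
    		{"proxy-client-ca.crt", "/var/lib/minikube/certs/proxy-client-ca.crt"},
    		{"proxy-client-ca.key", "/var/lib/minikube/certs/proxy-client-ca.key"},
    	}
    	for _, p := range pairs {
    		if err := scpToNode(filepath.Join(miniHome, p[0]), p[1]); err != nil {
    			fmt.Fprintln(os.Stderr, err)
    		}
    	}
    }
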
	I1123 09:08:25.952419  416838 ssh_runner.go:195] Run: openssl version
	I1123 09:08:25.958180  416838 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1123 09:08:25.967552  416838 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1123 09:08:25.971450  416838 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 23 08:20 /usr/share/ca-certificates/minikubeCA.pem
	I1123 09:08:25.971510  416838 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1123 09:08:26.009313  416838 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1123 09:08:26.019552  416838 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/107234.pem && ln -fs /usr/share/ca-certificates/107234.pem /etc/ssl/certs/107234.pem"
	I1123 09:08:26.028412  416838 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/107234.pem
	I1123 09:08:26.032165  416838 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 23 08:25 /usr/share/ca-certificates/107234.pem
	I1123 09:08:26.032218  416838 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/107234.pem
	I1123 09:08:26.068981  416838 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/107234.pem /etc/ssl/certs/51391683.0"
	I1123 09:08:26.077474  416838 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1072342.pem && ln -fs /usr/share/ca-certificates/1072342.pem /etc/ssl/certs/1072342.pem"
	I1123 09:08:26.086084  416838 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1072342.pem
	I1123 09:08:26.090076  416838 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 23 08:25 /usr/share/ca-certificates/1072342.pem
	I1123 09:08:26.090130  416838 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1072342.pem
	I1123 09:08:26.126340  416838 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1072342.pem /etc/ssl/certs/3ec20f2e.0"
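
Each of the three certs above is installed the way OpenSSL's trust lookup expects: the PEM is linked into /etc/ssl/certs under its subject hash with a .0 suffix (b5213941.0, 51391683.0, 3ec20f2e.0 in the log). A sketch of that hashing-and-linking step, assuming openssl is on PATH and the caller can write to /etc/ssl/certs:

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"path/filepath"
    	"strings"
    )

    // installCA links pemPath into /etc/ssl/certs under its OpenSSL
    // subject hash, mirroring `openssl x509 -hash -noout -in <pem>`
    // followed by `ln -fs`.
    func installCA(pemPath string) error {
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
    	if err != nil {
    		return fmt.Errorf("hash %s: %w", pemPath, err)
    	}
    	hash := strings.TrimSpace(string(out))
    	link := filepath.Join("/etc/ssl/certs", hash+".0")
    	_ = os.Remove(link) // the -f in ln -fs: replace a stale link if present
    	return os.Symlink(pemPath, link)
    }

    func main() {
    	if err := installCA("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    	}
    }
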
	I1123 09:08:26.135135  416838 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1123 09:08:26.139603  416838 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1123 09:08:26.183677  416838 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1123 09:08:26.220937  416838 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1123 09:08:26.271785  416838 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1123 09:08:26.318696  416838 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1123 09:08:26.379771  416838 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
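
The six `openssl x509 -checkend 86400` runs verify that each control-plane cert stays valid for at least another 24 hours before the existing files are reused. The same check in pure Go, decoding the PEM and comparing NotAfter; the path in main is illustrative:

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    // validFor reports whether the first certificate in the PEM file
    // remains valid for at least d from now (openssl's -checkend).
    func validFor(path string, d time.Duration) (bool, error) {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return false, err
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		return false, fmt.Errorf("%s: no PEM block", path)
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		return false, err
    	}
    	return time.Now().Add(d).Before(cert.NotAfter), nil
    }

    func main() {
    	ok, err := validFor("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
    	fmt.Println(ok, err)
    }
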
	I1123 09:08:26.427436  416838 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-602386 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-602386 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 09:08:26.427570  416838 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1123 09:08:26.427647  416838 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1123 09:08:26.464877  416838 cri.go:89] found id: "59138b2d822688d55c6f5894e7864beb2d6fa20594a1b422e8d201e2f8e1c1e2"
	I1123 09:08:26.464901  416838 cri.go:89] found id: "1adb64fac9cd8ca83cde2ea33c1a1d01fd97bd090a659c910fd2247606de3613"
	I1123 09:08:26.464908  416838 cri.go:89] found id: "cb6038e0d1fc65f02647a28477fb55a987cc2404a8c90e7eb192a2e5f4e18b98"
	I1123 09:08:26.464912  416838 cri.go:89] found id: "88d09657521f5eeced3d58b537526c35a1a86d0c7389280ba5c54672110cbd64"
	I1123 09:08:26.464917  416838 cri.go:89] found id: ""
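
cri.go's container listing is just `crictl ps -a --quiet` with a namespace label filter; the output is one container ID per line, and the trailing empty `found id: ""` above appears to be the split on the final newline. (The runc warning that follows is tolerated: with no paused containers, /run/runc does not exist.) A sketch of the same parse, assuming crictl is installed and the caller has the needed privileges:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // listKubeSystemIDs returns all container IDs (any state) whose pod
    // lives in kube-system, the same filter the log shows.
    func listKubeSystemIDs() ([]string, error) {
    	out, err := exec.Command("crictl", "ps", "-a", "--quiet",
    		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
    	if err != nil {
    		return nil, err
    	}
    	var ids []string
    	for _, line := range strings.Split(string(out), "\n") {
    		if line = strings.TrimSpace(line); line != "" { // drop the trailing blank
    			ids = append(ids, line)
    		}
    	}
    	return ids, nil
    }

    func main() {
    	ids, err := listKubeSystemIDs()
    	fmt.Println(ids, err)
    }
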
	I1123 09:08:26.465005  416838 ssh_runner.go:195] Run: sudo runc list -f json
	W1123 09:08:26.480693  416838 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T09:08:26Z" level=error msg="open /run/runc: no such file or directory"
	I1123 09:08:26.480757  416838 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1123 09:08:26.489588  416838 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1123 09:08:26.489607  416838 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1123 09:08:26.489657  416838 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1123 09:08:26.499238  416838 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1123 09:08:26.500705  416838 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-602386" does not appear in /home/jenkins/minikube-integration/21969-103686/kubeconfig
	I1123 09:08:26.501824  416838 kubeconfig.go:62] /home/jenkins/minikube-integration/21969-103686/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-602386" cluster setting kubeconfig missing "default-k8s-diff-port-602386" context setting]
	I1123 09:08:26.503256  416838 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21969-103686/kubeconfig: {Name:mk8cdc20da988b29689f768b3cb01ff7e637077d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 09:08:26.505808  416838 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1123 09:08:26.514307  416838 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.94.2
	I1123 09:08:26.514338  416838 kubeadm.go:602] duration metric: took 24.725225ms to restartPrimaryControlPlane
	I1123 09:08:26.514347  416838 kubeadm.go:403] duration metric: took 86.921144ms to StartCluster
	I1123 09:08:26.514364  416838 settings.go:142] acquiring lock: {Name:mk7e59eae8b3289f60fef384e6a5716369959bb2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 09:08:26.514429  416838 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21969-103686/kubeconfig
	I1123 09:08:26.516861  416838 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21969-103686/kubeconfig: {Name:mk8cdc20da988b29689f768b3cb01ff7e637077d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
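
The repair above notices the profile is missing from the kubeconfig ("does not appear in ...") and rewrites the file under a write lock. A sketch of the existence check and insert using client-go's clientcmd; the path, profile name, and server URL are taken from the log, and the sketch omits CA data:

    package main

    import (
    	"fmt"

    	"k8s.io/client-go/tools/clientcmd"
    	clientcmdapi "k8s.io/client-go/tools/clientcmd/api"
    )

    // ensureContext adds a cluster+context entry if the named profile is
    // absent, roughly what "kubeconfig needs updating (will repair)" does.
    func ensureContext(path, name, server string) error {
    	cfg, err := clientcmd.LoadFromFile(path)
    	if err != nil {
    		return err
    	}
    	if _, ok := cfg.Contexts[name]; ok {
    		return nil // already present, nothing to repair
    	}
    	cluster := clientcmdapi.NewCluster()
    	cluster.Server = server // assumption: CA data wired in elsewhere
    	cfg.Clusters[name] = cluster
    	cfg.AuthInfos[name] = clientcmdapi.NewAuthInfo() // assumption: creds filled in elsewhere
    	ctx := clientcmdapi.NewContext()
    	ctx.Cluster = name
    	ctx.AuthInfo = name
    	cfg.Contexts[name] = ctx
    	return clientcmd.WriteToFile(*cfg, path)
    }

    func main() {
    	err := ensureContext("/tmp/kubeconfig", "default-k8s-diff-port-602386",
    		"https://192.168.94.2:8444")
    	fmt.Println(err)
    }
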
	I1123 09:08:26.517152  416838 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1123 09:08:26.517225  416838 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1123 09:08:26.517332  416838 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-602386"
	I1123 09:08:26.517354  416838 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-602386"
	W1123 09:08:26.517363  416838 addons.go:248] addon storage-provisioner should already be in state true
	I1123 09:08:26.517382  416838 addons.go:70] Setting dashboard=true in profile "default-k8s-diff-port-602386"
	I1123 09:08:26.517403  416838 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-602386"
	I1123 09:08:26.517423  416838 addons.go:239] Setting addon dashboard=true in "default-k8s-diff-port-602386"
	W1123 09:08:26.517434  416838 addons.go:248] addon dashboard should already be in state true
	I1123 09:08:26.517428  416838 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-602386"
	I1123 09:08:26.517468  416838 host.go:66] Checking if "default-k8s-diff-port-602386" exists ...
	I1123 09:08:26.517394  416838 host.go:66] Checking if "default-k8s-diff-port-602386" exists ...
	I1123 09:08:26.517637  416838 config.go:182] Loaded profile config "default-k8s-diff-port-602386": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 09:08:26.517780  416838 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-602386 --format={{.State.Status}}
	I1123 09:08:26.518002  416838 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-602386 --format={{.State.Status}}
	I1123 09:08:26.518186  416838 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-602386 --format={{.State.Status}}
	I1123 09:08:26.519356  416838 out.go:179] * Verifying Kubernetes components...
	I1123 09:08:26.520536  416838 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 09:08:26.547074  416838 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-602386"
	W1123 09:08:26.547153  416838 addons.go:248] addon default-storageclass should already be in state true
	I1123 09:08:26.547183  416838 host.go:66] Checking if "default-k8s-diff-port-602386" exists ...
	I1123 09:08:26.547839  416838 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-602386 --format={{.State.Status}}
	I1123 09:08:26.548893  416838 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1123 09:08:26.549859  416838 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1123 09:08:26.551058  416838 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1123 09:08:26.551080  416838 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1123 09:08:26.551136  416838 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-602386
	I1123 09:08:26.551290  416838 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	W1123 09:08:22.480154  409946 pod_ready.go:104] pod "coredns-66bc5c9577-dhxwz" is not "Ready", error: <nil>
	W1123 09:08:24.978718  409946 pod_ready.go:104] pod "coredns-66bc5c9577-dhxwz" is not "Ready", error: <nil>
	W1123 09:08:26.979852  409946 pod_ready.go:104] pod "coredns-66bc5c9577-dhxwz" is not "Ready", error: <nil>
	I1123 09:08:23.062550  415250 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1123 09:08:23.067308  415250 api_server.go:279] https://192.168.103.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1123 09:08:23.067334  415250 api_server.go:103] status: https://192.168.103.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1123 09:08:23.561959  415250 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1123 09:08:23.566196  415250 api_server.go:279] https://192.168.103.2:8443/healthz returned 200:
	ok
	I1123 09:08:23.567141  415250 api_server.go:141] control plane version: v1.34.1
	I1123 09:08:23.567167  415250 api_server.go:131] duration metric: took 1.005360807s to wait for apiserver health ...
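
The 500s above are expected while the poststarthooks (here rbac/bootstrap-roles) finish; the health wait simply retries /healthz until it answers 200 or the deadline passes. A minimal sketch of that poll; TLS verification is skipped only because the apiserver's CA is not wired in here:

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    // waitHealthz polls url until it answers 200 or timeout elapses,
    // printing each non-200 body the way the minikube log does.
    func waitHealthz(url string, timeout time.Duration) error {
    	client := &http.Client{
    		Timeout: 2 * time.Second,
    		Transport: &http.Transport{
    			// assumption: self-signed apiserver cert, CA not loaded in this sketch
    			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
    		},
    	}
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		resp, err := client.Get(url)
    		if err == nil {
    			body, _ := io.ReadAll(resp.Body)
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				return nil
    			}
    			fmt.Printf("returned %d:\n%s\n", resp.StatusCode, body)
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("%s not healthy after %s", url, timeout)
    }

    func main() {
    	fmt.Println(waitHealthz("https://192.168.103.2:8443/healthz", time.Minute))
    }
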
	I1123 09:08:23.567176  415250 system_pods.go:43] waiting for kube-system pods to appear ...
	I1123 09:08:23.570684  415250 system_pods.go:59] 8 kube-system pods found
	I1123 09:08:23.570712  415250 system_pods.go:61] "coredns-66bc5c9577-k4bmj" [0676d3db-d11b-433f-9c17-6131468d109d] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 09:08:23.570720  415250 system_pods.go:61] "etcd-embed-certs-529341" [3a0211ec-d796-4eec-82d3-6599cb786897] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1123 09:08:23.570726  415250 system_pods.go:61] "kindnet-twlcq" [45682d16-1f1e-4733-8a6b-31cf7cdfa5bd] Running
	I1123 09:08:23.570733  415250 system_pods.go:61] "kube-apiserver-embed-certs-529341" [51301aaf-4d05-41b4-b9c6-8ba22416a628] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1123 09:08:23.570739  415250 system_pods.go:61] "kube-controller-manager-embed-certs-529341" [7538c458-808e-4018-b566-af01d924edee] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1123 09:08:23.570746  415250 system_pods.go:61] "kube-proxy-xfwhk" [86a6640d-80fe-45a3-b48b-d2577d222ccf] Running
	I1123 09:08:23.570751  415250 system_pods.go:61] "kube-scheduler-embed-certs-529341" [8d0a8add-2bc8-4811-a1ac-a6c8d6d8273e] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1123 09:08:23.570757  415250 system_pods.go:61] "storage-provisioner" [c60e7298-2b0f-49f5-afde-b97e4bc8287d] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 09:08:23.570764  415250 system_pods.go:74] duration metric: took 3.583909ms to wait for pod list to return data ...
	I1123 09:08:23.570772  415250 default_sa.go:34] waiting for default service account to be created ...
	I1123 09:08:23.573041  415250 default_sa.go:45] found service account: "default"
	I1123 09:08:23.573059  415250 default_sa.go:55] duration metric: took 2.281538ms for default service account to be created ...
	I1123 09:08:23.573067  415250 system_pods.go:116] waiting for k8s-apps to be running ...
	I1123 09:08:23.576026  415250 system_pods.go:86] 8 kube-system pods found
	I1123 09:08:23.576057  415250 system_pods.go:89] "coredns-66bc5c9577-k4bmj" [0676d3db-d11b-433f-9c17-6131468d109d] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1123 09:08:23.576071  415250 system_pods.go:89] "etcd-embed-certs-529341" [3a0211ec-d796-4eec-82d3-6599cb786897] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1123 09:08:23.576084  415250 system_pods.go:89] "kindnet-twlcq" [45682d16-1f1e-4733-8a6b-31cf7cdfa5bd] Running
	I1123 09:08:23.576094  415250 system_pods.go:89] "kube-apiserver-embed-certs-529341" [51301aaf-4d05-41b4-b9c6-8ba22416a628] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1123 09:08:23.576107  415250 system_pods.go:89] "kube-controller-manager-embed-certs-529341" [7538c458-808e-4018-b566-af01d924edee] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1123 09:08:23.576114  415250 system_pods.go:89] "kube-proxy-xfwhk" [86a6640d-80fe-45a3-b48b-d2577d222ccf] Running
	I1123 09:08:23.576123  415250 system_pods.go:89] "kube-scheduler-embed-certs-529341" [8d0a8add-2bc8-4811-a1ac-a6c8d6d8273e] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1123 09:08:23.576131  415250 system_pods.go:89] "storage-provisioner" [c60e7298-2b0f-49f5-afde-b97e4bc8287d] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1123 09:08:23.576141  415250 system_pods.go:126] duration metric: took 3.068556ms to wait for k8s-apps to be running ...
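
"Running / Ready:ContainersNotReady" above means the pod phase is already Running while the Ready condition is still false, which this k8s-apps check accepts. A client-go sketch of the same scan over kube-system; the kubeconfig path is an assumption:

    package main

    import (
    	"context"
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig") // assumption: path
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	pods, err := cs.CoreV1().Pods("kube-system").List(context.Background(), metav1.ListOptions{})
    	if err != nil {
    		panic(err)
    	}
    	for _, p := range pods.Items {
    		ready := false
    		for _, c := range p.Status.Conditions {
    			if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
    				ready = true
    			}
    		}
    		fmt.Printf("%s phase=%s ready=%v\n", p.Name, p.Status.Phase, ready)
    	}
    }
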
	I1123 09:08:23.576155  415250 system_svc.go:44] waiting for kubelet service to be running ....
	I1123 09:08:23.576207  415250 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 09:08:23.589656  415250 system_svc.go:56] duration metric: took 13.492988ms WaitForService to wait for kubelet
	I1123 09:08:23.589687  415250 kubeadm.go:587] duration metric: took 3.661095272s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1123 09:08:23.589706  415250 node_conditions.go:102] verifying NodePressure condition ...
	I1123 09:08:23.592613  415250 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1123 09:08:23.592642  415250 node_conditions.go:123] node cpu capacity is 8
	I1123 09:08:23.592661  415250 node_conditions.go:105] duration metric: took 2.94425ms to run NodePressure ...
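
The NodePressure verification reads the node's capacity (304681132Ki ephemeral storage, 8 CPUs here) and its pressure conditions. A sketch of the equivalent read, under the same kubeconfig-path assumption as above:

    package main

    import (
    	"context"
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/tmp/kubeconfig") // assumption: path
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
    	if err != nil {
    		panic(err)
    	}
    	for _, n := range nodes.Items {
    		cpu := n.Status.Capacity[corev1.ResourceCPU]
    		eph := n.Status.Capacity[corev1.ResourceEphemeralStorage]
    		fmt.Printf("%s cpu=%s ephemeral=%s\n", n.Name, cpu.String(), eph.String())
    		for _, c := range n.Status.Conditions {
    			// any True condition other than Ready is a pressure signal
    			if c.Type != corev1.NodeReady && c.Status == corev1.ConditionTrue {
    				fmt.Printf("  pressure: %s\n", c.Type)
    			}
    		}
    	}
    }
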
	I1123 09:08:23.592676  415250 start.go:242] waiting for startup goroutines ...
	I1123 09:08:23.592689  415250 start.go:247] waiting for cluster config update ...
	I1123 09:08:23.592708  415250 start.go:256] writing updated cluster config ...
	I1123 09:08:23.593078  415250 ssh_runner.go:195] Run: rm -f paused
	I1123 09:08:23.596792  415250 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1123 09:08:23.600652  415250 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-k4bmj" in "kube-system" namespace to be "Ready" or be gone ...
	W1123 09:08:25.606773  415250 pod_ready.go:104] pod "coredns-66bc5c9577-k4bmj" is not "Ready", error: <nil>
	W1123 09:08:27.610923  415250 pod_ready.go:104] pod "coredns-66bc5c9577-k4bmj" is not "Ready", error: <nil>
	I1123 09:08:26.555651  416838 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1123 09:08:26.555684  416838 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1123 09:08:26.555750  416838 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-602386
	I1123 09:08:26.584503  416838 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1123 09:08:26.584535  416838 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1123 09:08:26.584824  416838 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-602386
	I1123 09:08:26.589930  416838 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/21969-103686/.minikube/machines/default-k8s-diff-port-602386/id_rsa Username:docker}
	I1123 09:08:26.590627  416838 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/21969-103686/.minikube/machines/default-k8s-diff-port-602386/id_rsa Username:docker}
	I1123 09:08:26.615366  416838 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/21969-103686/.minikube/machines/default-k8s-diff-port-602386/id_rsa Username:docker}
	I1123 09:08:26.684290  416838 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 09:08:26.701472  416838 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-602386" to be "Ready" ...
	I1123 09:08:26.716057  416838 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1123 09:08:26.718579  416838 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1123 09:08:26.718616  416838 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1123 09:08:26.737734  416838 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1123 09:08:26.737751  416838 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1123 09:08:26.750096  416838 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1123 09:08:26.758250  416838 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1123 09:08:26.758613  416838 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1123 09:08:26.782936  416838 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1123 09:08:26.782964  416838 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1123 09:08:26.811228  416838 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1123 09:08:26.811260  416838 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1123 09:08:26.841566  416838 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1123 09:08:26.841592  416838 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1123 09:08:26.857272  416838 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1123 09:08:26.857295  416838 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1123 09:08:26.871688  416838 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1123 09:08:26.871709  416838 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1123 09:08:26.886574  416838 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1123 09:08:26.886600  416838 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1123 09:08:26.904626  416838 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
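
All ten dashboard manifests are first staged under /etc/kubernetes/addons and then applied in a single kubectl invocation with repeated -f flags, against the node-local kubeconfig. A sketch of building that command; the file list is abbreviated:

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    )

    func main() {
    	files := []string{
    		"/etc/kubernetes/addons/dashboard-ns.yaml",
    		"/etc/kubernetes/addons/dashboard-clusterrole.yaml",
    		"/etc/kubernetes/addons/dashboard-svc.yaml",
    		// ... remaining manifests elided
    	}
    	args := []string{"apply"}
    	for _, f := range files {
    		args = append(args, "-f", f)
    	}
    	cmd := exec.Command("/var/lib/minikube/binaries/v1.34.1/kubectl", args...)
    	cmd.Env = append(os.Environ(), "KUBECONFIG=/var/lib/minikube/kubeconfig")
    	out, err := cmd.CombinedOutput()
    	fmt.Println(string(out), err)
    }
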
	I1123 09:08:28.103222  416838 node_ready.go:49] node "default-k8s-diff-port-602386" is "Ready"
	I1123 09:08:28.103257  416838 node_ready.go:38] duration metric: took 1.401750447s for node "default-k8s-diff-port-602386" to be "Ready" ...
	I1123 09:08:28.103273  416838 api_server.go:52] waiting for apiserver process to appear ...
	I1123 09:08:28.103334  416838 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
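
Waiting for the apiserver "process to appear" is literally a pgrep poll: `pgrep -xnf` exits 0 once a process whose full command line matches kube-apiserver.*minikube.* exists. A sketch of the retry loop around it:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    // waitForProcess polls pgrep until the pattern matches a running
    // process or the deadline passes; pgrep exits non-zero on no match.
    func waitForProcess(pattern string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		if err := exec.Command("pgrep", "-xnf", pattern).Run(); err == nil {
    			return nil
    		}
    		time.Sleep(time.Second)
    	}
    	return fmt.Errorf("no process matching %q after %s", pattern, timeout)
    }

    func main() {
    	fmt.Println(waitForProcess("kube-apiserver.*minikube.*", 30*time.Second))
    }
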
	I1123 09:08:28.908407  416838 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.158065804s)
	I1123 09:08:28.908505  416838 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.19241101s)
	I1123 09:08:28.908675  416838 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.004006685s)
	I1123 09:08:28.908751  416838 api_server.go:72] duration metric: took 2.391563771s to wait for apiserver process to appear ...
	I1123 09:08:28.908816  416838 api_server.go:88] waiting for apiserver healthz status ...
	I1123 09:08:28.908854  416838 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8444/healthz ...
	I1123 09:08:28.910400  416838 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-602386 addons enable metrics-server
	
	I1123 09:08:28.916503  416838 api_server.go:279] https://192.168.94.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1123 09:08:28.916668  416838 api_server.go:103] status: https://192.168.94.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1123 09:08:28.926483  416838 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1123 09:08:28.928025  416838 addons.go:530] duration metric: took 2.410799195s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	
	
	==> CRI-O <==
	Nov 23 09:08:00 old-k8s-version-054094 crio[569]: time="2025-11-23T09:08:00.59068935Z" level=info msg="Created container eea74c901a931c4c28afb3b36f920404d19d8624dd9a2280c87c7e0a4c6619e4: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-262tc/dashboard-metrics-scraper" id=565e9a53-6de1-4397-b8b7-9d0a50ed5e86 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 09:08:00 old-k8s-version-054094 crio[569]: time="2025-11-23T09:08:00.591576377Z" level=info msg="Starting container: eea74c901a931c4c28afb3b36f920404d19d8624dd9a2280c87c7e0a4c6619e4" id=29ccfd69-f4fd-404e-9554-40e5050bd1b5 name=/runtime.v1.RuntimeService/StartContainer
	Nov 23 09:08:00 old-k8s-version-054094 crio[569]: time="2025-11-23T09:08:00.593443161Z" level=info msg="Started container" PID=1752 containerID=eea74c901a931c4c28afb3b36f920404d19d8624dd9a2280c87c7e0a4c6619e4 description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-262tc/dashboard-metrics-scraper id=29ccfd69-f4fd-404e-9554-40e5050bd1b5 name=/runtime.v1.RuntimeService/StartContainer sandboxID=f4c2b6e0482b4f41e32dc3d7c1716091dfcbd2816e26c4196e396a00c3918b5e
	Nov 23 09:08:01 old-k8s-version-054094 crio[569]: time="2025-11-23T09:08:01.039564163Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=55feba0a-35d1-48e6-a4f6-ad35e70dcc70 name=/runtime.v1.ImageService/ImageStatus
	Nov 23 09:08:01 old-k8s-version-054094 crio[569]: time="2025-11-23T09:08:01.042549561Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=5f8085a0-2bdb-4e38-bade-cded56165ba8 name=/runtime.v1.ImageService/ImageStatus
	Nov 23 09:08:01 old-k8s-version-054094 crio[569]: time="2025-11-23T09:08:01.04577461Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-262tc/dashboard-metrics-scraper" id=dc6f11a9-2301-4e10-9bba-fdd01cebd989 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 09:08:01 old-k8s-version-054094 crio[569]: time="2025-11-23T09:08:01.045929164Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 09:08:01 old-k8s-version-054094 crio[569]: time="2025-11-23T09:08:01.055138745Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 09:08:01 old-k8s-version-054094 crio[569]: time="2025-11-23T09:08:01.055636294Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 09:08:01 old-k8s-version-054094 crio[569]: time="2025-11-23T09:08:01.087055255Z" level=info msg="Created container 934eaad0aab1d0aea88e8660735c9268cdac9f5f4eae38e8daceba28ba691811: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-262tc/dashboard-metrics-scraper" id=dc6f11a9-2301-4e10-9bba-fdd01cebd989 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 09:08:01 old-k8s-version-054094 crio[569]: time="2025-11-23T09:08:01.087677104Z" level=info msg="Starting container: 934eaad0aab1d0aea88e8660735c9268cdac9f5f4eae38e8daceba28ba691811" id=a86c883f-3d79-4a48-96d1-947daf12b50d name=/runtime.v1.RuntimeService/StartContainer
	Nov 23 09:08:01 old-k8s-version-054094 crio[569]: time="2025-11-23T09:08:01.089999987Z" level=info msg="Started container" PID=1763 containerID=934eaad0aab1d0aea88e8660735c9268cdac9f5f4eae38e8daceba28ba691811 description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-262tc/dashboard-metrics-scraper id=a86c883f-3d79-4a48-96d1-947daf12b50d name=/runtime.v1.RuntimeService/StartContainer sandboxID=f4c2b6e0482b4f41e32dc3d7c1716091dfcbd2816e26c4196e396a00c3918b5e
	Nov 23 09:08:02 old-k8s-version-054094 crio[569]: time="2025-11-23T09:08:02.044546049Z" level=info msg="Removing container: eea74c901a931c4c28afb3b36f920404d19d8624dd9a2280c87c7e0a4c6619e4" id=65901194-9fe2-4899-897d-02df1391a9eb name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 23 09:08:02 old-k8s-version-054094 crio[569]: time="2025-11-23T09:08:02.05573717Z" level=info msg="Removed container eea74c901a931c4c28afb3b36f920404d19d8624dd9a2280c87c7e0a4c6619e4: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-262tc/dashboard-metrics-scraper" id=65901194-9fe2-4899-897d-02df1391a9eb name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 23 09:08:17 old-k8s-version-054094 crio[569]: time="2025-11-23T09:08:17.961293844Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=de0956d0-87da-491e-888c-07053fffc53e name=/runtime.v1.ImageService/ImageStatus
	Nov 23 09:08:17 old-k8s-version-054094 crio[569]: time="2025-11-23T09:08:17.962301452Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=0a4b3181-f0ac-49e2-8a34-5cfd2418f1ea name=/runtime.v1.ImageService/ImageStatus
	Nov 23 09:08:17 old-k8s-version-054094 crio[569]: time="2025-11-23T09:08:17.963386968Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-262tc/dashboard-metrics-scraper" id=738da8b9-2403-4130-9749-0762359a377f name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 09:08:17 old-k8s-version-054094 crio[569]: time="2025-11-23T09:08:17.963530618Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 09:08:17 old-k8s-version-054094 crio[569]: time="2025-11-23T09:08:17.970033328Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 09:08:17 old-k8s-version-054094 crio[569]: time="2025-11-23T09:08:17.970529198Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 09:08:17 old-k8s-version-054094 crio[569]: time="2025-11-23T09:08:17.99903147Z" level=info msg="Created container f0510ef795a2e0b5c70d3d975ff8094ef772658377dd866efff16426b9ceed2c: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-262tc/dashboard-metrics-scraper" id=738da8b9-2403-4130-9749-0762359a377f name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 09:08:17 old-k8s-version-054094 crio[569]: time="2025-11-23T09:08:17.999530887Z" level=info msg="Starting container: f0510ef795a2e0b5c70d3d975ff8094ef772658377dd866efff16426b9ceed2c" id=0d234516-d4c6-451e-9041-a45e9bc8f09a name=/runtime.v1.RuntimeService/StartContainer
	Nov 23 09:08:18 old-k8s-version-054094 crio[569]: time="2025-11-23T09:08:18.001618074Z" level=info msg="Started container" PID=1797 containerID=f0510ef795a2e0b5c70d3d975ff8094ef772658377dd866efff16426b9ceed2c description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-262tc/dashboard-metrics-scraper id=0d234516-d4c6-451e-9041-a45e9bc8f09a name=/runtime.v1.RuntimeService/StartContainer sandboxID=f4c2b6e0482b4f41e32dc3d7c1716091dfcbd2816e26c4196e396a00c3918b5e
	Nov 23 09:08:18 old-k8s-version-054094 crio[569]: time="2025-11-23T09:08:18.087727548Z" level=info msg="Removing container: 934eaad0aab1d0aea88e8660735c9268cdac9f5f4eae38e8daceba28ba691811" id=a14f0573-f338-4877-928f-a8c5eadb5a61 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 23 09:08:18 old-k8s-version-054094 crio[569]: time="2025-11-23T09:08:18.097280137Z" level=info msg="Removed container 934eaad0aab1d0aea88e8660735c9268cdac9f5f4eae38e8daceba28ba691811: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-262tc/dashboard-metrics-scraper" id=a14f0573-f338-4877-928f-a8c5eadb5a61 name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                              NAMESPACE
	f0510ef795a2e       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           14 seconds ago      Exited              dashboard-metrics-scraper   2                   f4c2b6e0482b4       dashboard-metrics-scraper-5f989dc9cf-262tc       kubernetes-dashboard
	b7902f0397bf0       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   34 seconds ago      Running             kubernetes-dashboard        0                   566c1fbcb017a       kubernetes-dashboard-8694d4445c-smgkc            kubernetes-dashboard
	a90702afed2b0       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           50 seconds ago      Running             storage-provisioner         1                   ee93b9758f5ff       storage-provisioner                              kube-system
	629c8538dd18c       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                           51 seconds ago      Running             coredns                     0                   c542cd613e7cc       coredns-5dd5756b68-whp8m                         kube-system
	9c7464426ad54       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           51 seconds ago      Running             busybox                     1                   7918e2f9585e5       busybox                                          default
	f32d9a2f7dcfa       ea1030da44aa18666a7bf15fddd2a38c3143c3277159cb8bdd95f45c8ce62d7a                                           51 seconds ago      Running             kube-proxy                  0                   f21c1d3e84905       kube-proxy-9crnb                                 kube-system
	3a5035af2c25e       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           51 seconds ago      Exited              storage-provisioner         0                   ee93b9758f5ff       storage-provisioner                              kube-system
	5e74dbebbc2f0       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           51 seconds ago      Running             kindnet-cni                 0                   1557ce1639fb0       kindnet-fhw8w                                    kube-system
	67da9dae46c0f       bb5e0dde9054c02d6badee88547be7e7bb7b7b818d277c8a61b4b29484bbff95                                           53 seconds ago      Running             kube-apiserver              0                   14b2fc7685cf4       kube-apiserver-old-k8s-version-054094            kube-system
	c7dc1d98ec4da       4be79c38a4bab6e1252a35697500e8a0d9c5c7c771d9fcc1935c9a7f6cdf4c62                                           53 seconds ago      Running             kube-controller-manager     0                   7ebe1c59fde3e       kube-controller-manager-old-k8s-version-054094   kube-system
	67cdf9a216a06       f6f496300a2ae7a6727ccf3080d66d2fd22b6cfc271df5351c976c23a28bb157                                           53 seconds ago      Running             kube-scheduler              0                   2f54d0e1541f9       kube-scheduler-old-k8s-version-054094            kube-system
	f308bae766722       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                           53 seconds ago      Running             etcd                        0                   c225f0b83cf95       etcd-old-k8s-version-054094                      kube-system
	
	
	==> coredns [629c8538dd18c46925238739061bb0f44ca62dc0ae653a849f5a698e44652b68] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = b7aacdf6a6aa730aafe4d018cac9b7b5ecfb346cba84a99f64521f87aef8b4958639c1cf97967716465791d05bd38f372615327b7cb1d93c850bae532744d54d
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:52083 - 45688 "HINFO IN 8643529580468224209.8226291599787353716. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.127089334s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> describe nodes <==
	Name:               old-k8s-version-054094
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-054094
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=50c3a8a3c03e8a84b6c978a884d21c3de8c6d4f1
	                    minikube.k8s.io/name=old-k8s-version-054094
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_23T09_06_33_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 23 Nov 2025 09:06:28 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-054094
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 23 Nov 2025 09:08:21 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 23 Nov 2025 09:08:11 +0000   Sun, 23 Nov 2025 09:06:26 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 23 Nov 2025 09:08:11 +0000   Sun, 23 Nov 2025 09:06:26 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 23 Nov 2025 09:08:11 +0000   Sun, 23 Nov 2025 09:06:26 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 23 Nov 2025 09:08:11 +0000   Sun, 23 Nov 2025 09:06:58 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    old-k8s-version-054094
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 9629f1d5bc1ed524a56ce23c69214c09
	  System UUID:                e0f1f612-a814-499c-889a-0902ab6fee2d
	  Boot ID:                    49386d37-de97-442f-8364-9b77578bcf47
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         91s
	  kube-system                 coredns-5dd5756b68-whp8m                          100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     108s
	  kube-system                 etcd-old-k8s-version-054094                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         2m
	  kube-system                 kindnet-fhw8w                                     100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      108s
	  kube-system                 kube-apiserver-old-k8s-version-054094             250m (3%)     0 (0%)      0 (0%)           0 (0%)         2m
	  kube-system                 kube-controller-manager-old-k8s-version-054094    200m (2%)     0 (0%)      0 (0%)           0 (0%)         2m
	  kube-system                 kube-proxy-9crnb                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         108s
	  kube-system                 kube-scheduler-old-k8s-version-054094             100m (1%)     0 (0%)      0 (0%)           0 (0%)         2m
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         107s
	  kubernetes-dashboard        dashboard-metrics-scraper-5f989dc9cf-262tc        0 (0%)        0 (0%)      0 (0%)           0 (0%)         39s
	  kubernetes-dashboard        kubernetes-dashboard-8694d4445c-smgkc             0 (0%)        0 (0%)      0 (0%)           0 (0%)         39s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 107s                 kube-proxy       
	  Normal  Starting                 51s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  2m6s (x8 over 2m6s)  kubelet          Node old-k8s-version-054094 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m6s (x8 over 2m6s)  kubelet          Node old-k8s-version-054094 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m6s (x8 over 2m6s)  kubelet          Node old-k8s-version-054094 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientPID     2m                   kubelet          Node old-k8s-version-054094 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  2m                   kubelet          Node old-k8s-version-054094 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m                   kubelet          Node old-k8s-version-054094 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 2m                   kubelet          Starting kubelet.
	  Normal  RegisteredNode           109s                 node-controller  Node old-k8s-version-054094 event: Registered Node old-k8s-version-054094 in Controller
	  Normal  NodeReady                94s                  kubelet          Node old-k8s-version-054094 status is now: NodeReady
	  Normal  Starting                 55s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  54s (x8 over 55s)    kubelet          Node old-k8s-version-054094 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    54s (x8 over 55s)    kubelet          Node old-k8s-version-054094 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     54s (x8 over 55s)    kubelet          Node old-k8s-version-054094 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           39s                  node-controller  Node old-k8s-version-054094 event: Registered Node old-k8s-version-054094 in Controller
	
	
	==> dmesg <==
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff e2 f6 b3 df 65 66 08 06
	[ +15.220231] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff ce d6 cd 1c d5 af 08 06
	[  +0.016823] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff 42 21 9e 70 54 d2 08 06
	[  +0.853950] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 8a f3 da 67 50 34 08 06
	[  +0.000402] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff e2 f6 b3 df 65 66 08 06
	[Nov23 09:06] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 3a fe f0 bb b2 e5 08 06
	[  +0.000433] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000017] ll header: 00000000: ff ff ff ff ff ff 42 21 9e 70 54 d2 08 06
	[ +22.099976] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff fa bc 42 68 ca 38 08 06
	[  +0.042361] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 02 6f 93 2c ed 12 08 06
	[ +12.988668] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff c6 40 c7 0d 08 88 08 06
	[  +0.000458] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff c6 f2 c5 3b d5 0a 08 06
	[  +8.074904] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff ba d8 15 23 cb ea 08 06
	[  +0.000480] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff fa bc 42 68 ca 38 08 06
	
	
	==> etcd [f308bae766722fb5efa2c7d1616cb7025893f5d7f71c748c3370f5085550daeb] <==
	{"level":"info","ts":"2025-11-23T09:07:38.557276Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-11-23T09:07:38.557577Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 switched to configuration voters=(16896983918768216326)"}
	{"level":"info","ts":"2025-11-23T09:07:38.55839Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","added-peer-id":"ea7e25599daad906","added-peer-peer-urls":["https://192.168.76.2:2380"]}
	{"level":"info","ts":"2025-11-23T09:07:38.558526Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"6f20f2c4b2fb5f8a","local-member-id":"ea7e25599daad906","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-23T09:07:38.558565Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-23T09:07:38.559994Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-11-23T09:07:38.560127Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-11-23T09:07:38.560166Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-11-23T09:07:38.560247Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"ea7e25599daad906","initial-advertise-peer-urls":["https://192.168.76.2:2380"],"listen-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.76.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-11-23T09:07:38.560276Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-11-23T09:07:39.847907Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 is starting a new election at term 2"}
	{"level":"info","ts":"2025-11-23T09:07:39.847961Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became pre-candidate at term 2"}
	{"level":"info","ts":"2025-11-23T09:07:39.848002Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgPreVoteResp from ea7e25599daad906 at term 2"}
	{"level":"info","ts":"2025-11-23T09:07:39.848017Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became candidate at term 3"}
	{"level":"info","ts":"2025-11-23T09:07:39.848026Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 received MsgVoteResp from ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2025-11-23T09:07:39.848038Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"ea7e25599daad906 became leader at term 3"}
	{"level":"info","ts":"2025-11-23T09:07:39.848068Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: ea7e25599daad906 elected leader ea7e25599daad906 at term 3"}
	{"level":"info","ts":"2025-11-23T09:07:39.850353Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"ea7e25599daad906","local-member-attributes":"{Name:old-k8s-version-054094 ClientURLs:[https://192.168.76.2:2379]}","request-path":"/0/members/ea7e25599daad906/attributes","cluster-id":"6f20f2c4b2fb5f8a","publish-timeout":"7s"}
	{"level":"info","ts":"2025-11-23T09:07:39.850357Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-23T09:07:39.850378Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-23T09:07:39.850636Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-11-23T09:07:39.850678Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-11-23T09:07:39.851471Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.76.2:2379"}
	{"level":"info","ts":"2025-11-23T09:07:39.851542Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-11-23T09:08:12.155448Z","caller":"traceutil/trace.go:171","msg":"trace[1768020708] transaction","detail":"{read_only:false; response_revision:646; number_of_response:1; }","duration":"126.759793ms","start":"2025-11-23T09:08:12.02865Z","end":"2025-11-23T09:08:12.15541Z","steps":["trace[1768020708] 'process raft request'  (duration: 56.026699ms)","trace[1768020708] 'compare'  (duration: 70.593972ms)"],"step_count":2}
	
	
	==> kernel <==
	 09:08:32 up  1:50,  0 user,  load average: 7.66, 4.84, 2.95
	Linux old-k8s-version-054094 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [5e74dbebbc2f09c92cc8f26c86f4a178da062712bef7fb2aa891abaf9d0ef753] <==
	I1123 09:07:41.609620       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1123 09:07:41.609901       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1123 09:07:41.610135       1 main.go:148] setting mtu 1500 for CNI 
	I1123 09:07:41.610156       1 main.go:178] kindnetd IP family: "ipv4"
	I1123 09:07:41.610185       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-23T09:07:41Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1123 09:07:41.812878       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1123 09:07:41.831991       1 controller.go:381] "Waiting for informer caches to sync"
	I1123 09:07:41.832125       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1123 09:07:41.832733       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1123 09:07:42.133146       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1123 09:07:42.133188       1 metrics.go:72] Registering metrics
	I1123 09:07:42.133272       1 controller.go:711] "Syncing nftables rules"
	I1123 09:07:51.813170       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1123 09:07:51.813228       1 main.go:301] handling current node
	I1123 09:08:01.813668       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1123 09:08:01.813737       1 main.go:301] handling current node
	I1123 09:08:11.820029       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1123 09:08:11.820063       1 main.go:301] handling current node
	I1123 09:08:21.815054       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1123 09:08:21.815112       1 main.go:301] handling current node
	I1123 09:08:31.819096       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1123 09:08:31.819128       1 main.go:301] handling current node
	
	
	==> kube-apiserver [67da9dae46c0f7bf57f9dc994797c8788e6a957999afabdd876c802e5872cb68] <==
	I1123 09:07:40.849518       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1123 09:07:40.849672       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1123 09:07:40.849696       1 aggregator.go:166] initial CRD sync complete...
	I1123 09:07:40.849703       1 autoregister_controller.go:141] Starting autoregister controller
	I1123 09:07:40.849709       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1123 09:07:40.849715       1 cache.go:39] Caches are synced for autoregister controller
	I1123 09:07:40.850158       1 shared_informer.go:318] Caches are synced for configmaps
	E1123 09:07:40.850322       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["exempt","global-default","leader-election","node-high","system","workload-high","workload-low","catch-all"] items=[{},{},{},{},{},{},{},{}]
	I1123 09:07:40.897469       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1123 09:07:41.748399       1 controller.go:624] quota admission added evaluator for: namespaces
	I1123 09:07:41.752625       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1123 09:07:41.780777       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1123 09:07:41.801128       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1123 09:07:41.810087       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1123 09:07:41.818582       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1123 09:07:41.855317       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.103.138.60"}
	I1123 09:07:41.868717       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.110.50.113"}
	E1123 09:07:50.849747       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["workload-high","workload-low","catch-all","exempt","global-default","leader-election","node-high","system"] items=[{},{},{},{},{},{},{},{}]
	I1123 09:07:53.317312       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1123 09:07:53.319804       1 controller.go:624] quota admission added evaluator for: endpoints
	I1123 09:07:53.415359       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	E1123 09:08:00.851052       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["global-default","leader-election","node-high","system","workload-high","workload-low","catch-all","exempt"] items=[{},{},{},{},{},{},{},{}]
	E1123 09:08:10.851489       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["leader-election","node-high","system","workload-high","workload-low","catch-all","exempt","global-default"] items=[{},{},{},{},{},{},{},{}]
	E1123 09:08:20.852197       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["leader-election","node-high","system","workload-high","workload-low","catch-all","exempt","global-default"] items=[{},{},{},{},{},{},{},{}]
	E1123 09:08:30.853164       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["system","workload-high","workload-low","catch-all","exempt","global-default","leader-election","node-high"] items=[{},{},{},{},{},{},{},{}]
	
	
	==> kube-controller-manager [c7dc1d98ec4da99d3a0764984d5923c598972517dad05844b7805b9388bb5cc9] <==
	I1123 09:07:53.453576       1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-8694d4445c-smgkc"
	I1123 09:07:53.454078       1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-5f989dc9cf-262tc"
	I1123 09:07:53.464090       1 shared_informer.go:318] Caches are synced for ReplicationController
	I1123 09:07:53.468083       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="40.231975ms"
	I1123 09:07:53.472760       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="44.632289ms"
	I1123 09:07:53.480761       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="11.874826ms"
	I1123 09:07:53.481643       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="68.246µs"
	I1123 09:07:53.485745       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="100.929µs"
	I1123 09:07:53.492085       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="19.20144ms"
	I1123 09:07:53.492294       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="85.205µs"
	I1123 09:07:53.497645       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="58.15µs"
	I1123 09:07:53.499125       1 shared_informer.go:318] Caches are synced for resource quota
	I1123 09:07:53.514855       1 shared_informer.go:318] Caches are synced for disruption
	I1123 09:07:53.857266       1 shared_informer.go:318] Caches are synced for garbage collector
	I1123 09:07:53.925851       1 shared_informer.go:318] Caches are synced for garbage collector
	I1123 09:07:53.925885       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1123 09:07:59.125193       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="41.675474ms"
	I1123 09:07:59.125354       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="112.399µs"
	I1123 09:08:01.054336       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="62.05µs"
	I1123 09:08:02.057730       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="128.84µs"
	I1123 09:08:03.059539       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="65.67µs"
	I1123 09:08:12.246908       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="89.850207ms"
	I1123 09:08:12.247054       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="97.987µs"
	I1123 09:08:18.096954       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="69.043µs"
	I1123 09:08:23.786825       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="71.818µs"
	
	
	==> kube-proxy [f32d9a2f7dcfa4d5ba236560662bb95e5ec188a673b28df770ec09f9d9c6aac9] <==
	I1123 09:07:41.428876       1 server_others.go:69] "Using iptables proxy"
	I1123 09:07:41.441378       1 node.go:141] Successfully retrieved node IP: 192.168.76.2
	I1123 09:07:41.463757       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1123 09:07:41.466437       1 server_others.go:152] "Using iptables Proxier"
	I1123 09:07:41.466475       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1123 09:07:41.466482       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1123 09:07:41.466511       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1123 09:07:41.466720       1 server.go:846] "Version info" version="v1.28.0"
	I1123 09:07:41.466733       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1123 09:07:41.468423       1 config.go:188] "Starting service config controller"
	I1123 09:07:41.468435       1 config.go:97] "Starting endpoint slice config controller"
	I1123 09:07:41.468843       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1123 09:07:41.468842       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1123 09:07:41.468622       1 config.go:315] "Starting node config controller"
	I1123 09:07:41.468982       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1123 09:07:41.570637       1 shared_informer.go:318] Caches are synced for node config
	I1123 09:07:41.570661       1 shared_informer.go:318] Caches are synced for service config
	I1123 09:07:41.570671       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [67cdf9a216a06c548df986856a47cb4952575cfc9b63188445c10205400e34be] <==
	I1123 09:07:38.979386       1 serving.go:348] Generated self-signed cert in-memory
	W1123 09:07:40.789289       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1123 09:07:40.789345       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1123 09:07:40.789358       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1123 09:07:40.789367       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1123 09:07:40.826776       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.0"
	I1123 09:07:40.828245       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1123 09:07:40.830780       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1123 09:07:40.830820       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1123 09:07:40.832079       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I1123 09:07:40.832340       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1123 09:07:40.931459       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Nov 23 09:07:53 old-k8s-version-054094 kubelet[737]: I1123 09:07:53.458991     737 topology_manager.go:215] "Topology Admit Handler" podUID="9aeb7744-7444-4754-a199-8a503b630d8b" podNamespace="kubernetes-dashboard" podName="kubernetes-dashboard-8694d4445c-smgkc"
	Nov 23 09:07:53 old-k8s-version-054094 kubelet[737]: I1123 09:07:53.475689     737 topology_manager.go:215] "Topology Admit Handler" podUID="5765b029-c0f4-4dd5-b495-7744f5cb301b" podNamespace="kubernetes-dashboard" podName="dashboard-metrics-scraper-5f989dc9cf-262tc"
	Nov 23 09:07:53 old-k8s-version-054094 kubelet[737]: I1123 09:07:53.583183     737 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/5765b029-c0f4-4dd5-b495-7744f5cb301b-tmp-volume\") pod \"dashboard-metrics-scraper-5f989dc9cf-262tc\" (UID: \"5765b029-c0f4-4dd5-b495-7744f5cb301b\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-262tc"
	Nov 23 09:07:53 old-k8s-version-054094 kubelet[737]: I1123 09:07:53.583245     737 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/9aeb7744-7444-4754-a199-8a503b630d8b-tmp-volume\") pod \"kubernetes-dashboard-8694d4445c-smgkc\" (UID: \"9aeb7744-7444-4754-a199-8a503b630d8b\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-smgkc"
	Nov 23 09:07:53 old-k8s-version-054094 kubelet[737]: I1123 09:07:53.583279     737 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xll5p\" (UniqueName: \"kubernetes.io/projected/9aeb7744-7444-4754-a199-8a503b630d8b-kube-api-access-xll5p\") pod \"kubernetes-dashboard-8694d4445c-smgkc\" (UID: \"9aeb7744-7444-4754-a199-8a503b630d8b\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-smgkc"
	Nov 23 09:07:53 old-k8s-version-054094 kubelet[737]: I1123 09:07:53.583361     737 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6s9r2\" (UniqueName: \"kubernetes.io/projected/5765b029-c0f4-4dd5-b495-7744f5cb301b-kube-api-access-6s9r2\") pod \"dashboard-metrics-scraper-5f989dc9cf-262tc\" (UID: \"5765b029-c0f4-4dd5-b495-7744f5cb301b\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-262tc"
	Nov 23 09:08:01 old-k8s-version-054094 kubelet[737]: I1123 09:08:01.038951     737 scope.go:117] "RemoveContainer" containerID="eea74c901a931c4c28afb3b36f920404d19d8624dd9a2280c87c7e0a4c6619e4"
	Nov 23 09:08:01 old-k8s-version-054094 kubelet[737]: I1123 09:08:01.054455     737 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-smgkc" podStartSLOduration=3.615675785 podCreationTimestamp="2025-11-23 09:07:53 +0000 UTC" firstStartedPulling="2025-11-23 09:07:53.79480745 +0000 UTC m=+15.923410308" lastFinishedPulling="2025-11-23 09:07:58.233521345 +0000 UTC m=+20.362124192" observedRunningTime="2025-11-23 09:07:59.082027409 +0000 UTC m=+21.210630275" watchObservedRunningTime="2025-11-23 09:08:01.054389669 +0000 UTC m=+23.182992537"
	Nov 23 09:08:02 old-k8s-version-054094 kubelet[737]: I1123 09:08:02.043224     737 scope.go:117] "RemoveContainer" containerID="eea74c901a931c4c28afb3b36f920404d19d8624dd9a2280c87c7e0a4c6619e4"
	Nov 23 09:08:02 old-k8s-version-054094 kubelet[737]: I1123 09:08:02.043416     737 scope.go:117] "RemoveContainer" containerID="934eaad0aab1d0aea88e8660735c9268cdac9f5f4eae38e8daceba28ba691811"
	Nov 23 09:08:02 old-k8s-version-054094 kubelet[737]: E1123 09:08:02.043786     737 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-262tc_kubernetes-dashboard(5765b029-c0f4-4dd5-b495-7744f5cb301b)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-262tc" podUID="5765b029-c0f4-4dd5-b495-7744f5cb301b"
	Nov 23 09:08:03 old-k8s-version-054094 kubelet[737]: I1123 09:08:03.047484     737 scope.go:117] "RemoveContainer" containerID="934eaad0aab1d0aea88e8660735c9268cdac9f5f4eae38e8daceba28ba691811"
	Nov 23 09:08:03 old-k8s-version-054094 kubelet[737]: E1123 09:08:03.047893     737 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-262tc_kubernetes-dashboard(5765b029-c0f4-4dd5-b495-7744f5cb301b)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-262tc" podUID="5765b029-c0f4-4dd5-b495-7744f5cb301b"
	Nov 23 09:08:04 old-k8s-version-054094 kubelet[737]: I1123 09:08:04.049917     737 scope.go:117] "RemoveContainer" containerID="934eaad0aab1d0aea88e8660735c9268cdac9f5f4eae38e8daceba28ba691811"
	Nov 23 09:08:04 old-k8s-version-054094 kubelet[737]: E1123 09:08:04.050327     737 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-262tc_kubernetes-dashboard(5765b029-c0f4-4dd5-b495-7744f5cb301b)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-262tc" podUID="5765b029-c0f4-4dd5-b495-7744f5cb301b"
	Nov 23 09:08:17 old-k8s-version-054094 kubelet[737]: I1123 09:08:17.960696     737 scope.go:117] "RemoveContainer" containerID="934eaad0aab1d0aea88e8660735c9268cdac9f5f4eae38e8daceba28ba691811"
	Nov 23 09:08:18 old-k8s-version-054094 kubelet[737]: I1123 09:08:18.086431     737 scope.go:117] "RemoveContainer" containerID="934eaad0aab1d0aea88e8660735c9268cdac9f5f4eae38e8daceba28ba691811"
	Nov 23 09:08:18 old-k8s-version-054094 kubelet[737]: I1123 09:08:18.086678     737 scope.go:117] "RemoveContainer" containerID="f0510ef795a2e0b5c70d3d975ff8094ef772658377dd866efff16426b9ceed2c"
	Nov 23 09:08:18 old-k8s-version-054094 kubelet[737]: E1123 09:08:18.087049     737 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-262tc_kubernetes-dashboard(5765b029-c0f4-4dd5-b495-7744f5cb301b)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-262tc" podUID="5765b029-c0f4-4dd5-b495-7744f5cb301b"
	Nov 23 09:08:23 old-k8s-version-054094 kubelet[737]: I1123 09:08:23.777427     737 scope.go:117] "RemoveContainer" containerID="f0510ef795a2e0b5c70d3d975ff8094ef772658377dd866efff16426b9ceed2c"
	Nov 23 09:08:23 old-k8s-version-054094 kubelet[737]: E1123 09:08:23.777801     737 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-262tc_kubernetes-dashboard(5765b029-c0f4-4dd5-b495-7744f5cb301b)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-262tc" podUID="5765b029-c0f4-4dd5-b495-7744f5cb301b"
	Nov 23 09:08:27 old-k8s-version-054094 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 23 09:08:27 old-k8s-version-054094 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 23 09:08:27 old-k8s-version-054094 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Nov 23 09:08:27 old-k8s-version-054094 systemd[1]: kubelet.service: Consumed 1.466s CPU time.
	
	
	==> kubernetes-dashboard [b7902f0397bf02fb653af022bdad06aea40eb13c6da9af1435a515c5ad12d0e1] <==
	2025/11/23 09:07:58 Using namespace: kubernetes-dashboard
	2025/11/23 09:07:58 Using in-cluster config to connect to apiserver
	2025/11/23 09:07:58 Using secret token for csrf signing
	2025/11/23 09:07:58 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/23 09:07:58 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/23 09:07:58 Successful initial request to the apiserver, version: v1.28.0
	2025/11/23 09:07:58 Generating JWE encryption key
	2025/11/23 09:07:58 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/23 09:07:58 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/23 09:07:58 Initializing JWE encryption key from synchronized object
	2025/11/23 09:07:58 Creating in-cluster Sidecar client
	2025/11/23 09:07:58 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/23 09:07:58 Serving insecurely on HTTP port: 9090
	2025/11/23 09:08:28 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/23 09:07:58 Starting overwatch
	
	
	==> storage-provisioner [3a5035af2c25e9076b679fe308a44f43a32681ba1653ba021cc6294822caf7f9] <==
	I1123 09:07:41.381987       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1123 09:07:41.388611       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	
	
	==> storage-provisioner [a90702afed2b040fbd77498ec11afee60f27d3a30d65069dea6e6961e8118621] <==
	I1123 09:07:42.036854       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1123 09:07:42.044276       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1123 09:07:42.044310       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1123 09:07:59.444577       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1123 09:07:59.444712       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"c3dec40d-0ff5-42c0-b2b8-e87a7b713465", APIVersion:"v1", ResourceVersion:"616", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-054094_3fb7a2a9-4954-480e-a2ce-90d7562fdeac became leader
	I1123 09:07:59.444721       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-054094_3fb7a2a9-4954-480e-a2ce-90d7562fdeac!
	I1123 09:07:59.544865       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-054094_3fb7a2a9-4954-480e-a2ce-90d7562fdeac!
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-054094 -n old-k8s-version-054094
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-054094 -n old-k8s-version-054094: exit status 2 (439.275347ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-054094 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/old-k8s-version/serial/Pause (7.06s)
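For reference, the status probe the harness runs above (helpers_test.go:262) can be reproduced outside the suite. A minimal Go sketch, assuming the same binary path and profile name shown in the log; this illustrates the probe, it is not the harness code itself:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Profile under test, taken from the post-mortem above.
	profile := "old-k8s-version-054094"
	// The same command the harness invokes at helpers_test.go:262.
	cmd := exec.Command("out/minikube-linux-amd64", "status",
		"--format={{.APIServer}}", "-p", profile, "-n", profile)
	out, err := cmd.Output() // stdout is still returned on a non-zero exit
	if ee, ok := err.(*exec.ExitError); ok && ee.ExitCode() == 2 {
		// The harness treats exit status 2 as non-fatal, matching the
		// "status error: exit status 2 (may be ok)" line above.
		fmt.Printf("status error: exit status 2 (may be ok): %s", out)
		return
	}
	if err != nil {
		fmt.Println("status probe failed:", err)
		return
	}
	fmt.Printf("apiserver: %s", out)
}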

x
+
TestStartStop/group/no-preload/serial/Pause (5.85s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-619589 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p no-preload-619589 --alsologtostderr -v=1: exit status 80 (1.82722358s)

-- stdout --
	* Pausing node no-preload-619589 ... 
	
	

-- /stdout --
** stderr ** 
	I1123 09:08:49.286920  424563 out.go:360] Setting OutFile to fd 1 ...
	I1123 09:08:49.287202  424563 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 09:08:49.287213  424563 out.go:374] Setting ErrFile to fd 2...
	I1123 09:08:49.287219  424563 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 09:08:49.287452  424563 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21969-103686/.minikube/bin
	I1123 09:08:49.287722  424563 out.go:368] Setting JSON to false
	I1123 09:08:49.287749  424563 mustload.go:66] Loading cluster: no-preload-619589
	I1123 09:08:49.288172  424563 config.go:182] Loaded profile config "no-preload-619589": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 09:08:49.288598  424563 cli_runner.go:164] Run: docker container inspect no-preload-619589 --format={{.State.Status}}
	I1123 09:08:49.305881  424563 host.go:66] Checking if "no-preload-619589" exists ...
	I1123 09:08:49.306172  424563 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 09:08:49.372308  424563 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:80 OomKillDisable:false NGoroutines:88 SystemTime:2025-11-23 09:08:49.362058794 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1123 09:08:49.373161  424563 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1763503576-21924/minikube-v1.37.0-1763503576-21924-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1763503576-21924-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:no-preload-619589 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1123 09:08:49.374999  424563 out.go:179] * Pausing node no-preload-619589 ... 
	I1123 09:08:49.376097  424563 host.go:66] Checking if "no-preload-619589" exists ...
	I1123 09:08:49.376366  424563 ssh_runner.go:195] Run: systemctl --version
	I1123 09:08:49.376409  424563 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-619589
	I1123 09:08:49.394652  424563 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33113 SSHKeyPath:/home/jenkins/minikube-integration/21969-103686/.minikube/machines/no-preload-619589/id_rsa Username:docker}
	I1123 09:08:49.495826  424563 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 09:08:49.518412  424563 pause.go:52] kubelet running: true
	I1123 09:08:49.518506  424563 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1123 09:08:49.689754  424563 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1123 09:08:49.689839  424563 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1123 09:08:49.764721  424563 cri.go:89] found id: "23f1cd486ac33874f452790b2608401eddaa4a3bd8f96430c807fcfb5e1937b0"
	I1123 09:08:49.764746  424563 cri.go:89] found id: "7941decb9aa0e2ebe4de99bf8450f512bdbb39ec2aa3306ef5a8400615d2d659"
	I1123 09:08:49.764752  424563 cri.go:89] found id: "b6302a52b3f07d041a82fd1384e80a24771f24468d8b556d23d35c13521bfcd3"
	I1123 09:08:49.764758  424563 cri.go:89] found id: "f2f44d09fd70f07bb62953d4d3a45b5459cdef60ec014cf96fd80a6ed19a134b"
	I1123 09:08:49.764761  424563 cri.go:89] found id: "14ae20126a459a7fdf582ec5a271d47e3dca1e142c4ebf9e0350dd559be93573"
	I1123 09:08:49.764766  424563 cri.go:89] found id: "a3bc253f74d935c63450cd3db07c274df85d3f1746da99b79e94bf15141d4c16"
	I1123 09:08:49.764769  424563 cri.go:89] found id: "6ac3ed6ad22f96a5e8a6803a48c463751843af2805ec1400ba36fedc144cf1d9"
	I1123 09:08:49.764773  424563 cri.go:89] found id: "9b89533199bb2186454a2491d3cdd6e0a13a98d889f1739695a869ff190a6ad7"
	I1123 09:08:49.764778  424563 cri.go:89] found id: "1f60fb31039bdce86058df87c7da04ea74adbafc6e245568fb6ab0413a0af065"
	I1123 09:08:49.764807  424563 cri.go:89] found id: "3400c7d3fe5a0c4d0c4a74a2bbd7dfcc480fe5e231914a3065df81f0bdc925f6"
	I1123 09:08:49.764816  424563 cri.go:89] found id: "c7be853bc6291068babb574a6fed0026a725056d23096bb61e1d6ffc9a4a6fa1"
	I1123 09:08:49.764821  424563 cri.go:89] found id: ""
	I1123 09:08:49.764868  424563 ssh_runner.go:195] Run: sudo runc list -f json
	I1123 09:08:49.776848  424563 retry.go:31] will retry after 364.095284ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T09:08:49Z" level=error msg="open /run/runc: no such file or directory"
	I1123 09:08:50.141180  424563 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 09:08:50.154441  424563 pause.go:52] kubelet running: false
	I1123 09:08:50.154502  424563 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1123 09:08:50.293249  424563 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1123 09:08:50.293350  424563 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1123 09:08:50.362111  424563 cri.go:89] found id: "23f1cd486ac33874f452790b2608401eddaa4a3bd8f96430c807fcfb5e1937b0"
	I1123 09:08:50.362138  424563 cri.go:89] found id: "7941decb9aa0e2ebe4de99bf8450f512bdbb39ec2aa3306ef5a8400615d2d659"
	I1123 09:08:50.362143  424563 cri.go:89] found id: "b6302a52b3f07d041a82fd1384e80a24771f24468d8b556d23d35c13521bfcd3"
	I1123 09:08:50.362147  424563 cri.go:89] found id: "f2f44d09fd70f07bb62953d4d3a45b5459cdef60ec014cf96fd80a6ed19a134b"
	I1123 09:08:50.362150  424563 cri.go:89] found id: "14ae20126a459a7fdf582ec5a271d47e3dca1e142c4ebf9e0350dd559be93573"
	I1123 09:08:50.362154  424563 cri.go:89] found id: "a3bc253f74d935c63450cd3db07c274df85d3f1746da99b79e94bf15141d4c16"
	I1123 09:08:50.362157  424563 cri.go:89] found id: "6ac3ed6ad22f96a5e8a6803a48c463751843af2805ec1400ba36fedc144cf1d9"
	I1123 09:08:50.362162  424563 cri.go:89] found id: "9b89533199bb2186454a2491d3cdd6e0a13a98d889f1739695a869ff190a6ad7"
	I1123 09:08:50.362166  424563 cri.go:89] found id: "1f60fb31039bdce86058df87c7da04ea74adbafc6e245568fb6ab0413a0af065"
	I1123 09:08:50.362181  424563 cri.go:89] found id: "3400c7d3fe5a0c4d0c4a74a2bbd7dfcc480fe5e231914a3065df81f0bdc925f6"
	I1123 09:08:50.362190  424563 cri.go:89] found id: "c7be853bc6291068babb574a6fed0026a725056d23096bb61e1d6ffc9a4a6fa1"
	I1123 09:08:50.362193  424563 cri.go:89] found id: ""
	I1123 09:08:50.362233  424563 ssh_runner.go:195] Run: sudo runc list -f json
	I1123 09:08:50.373797  424563 retry.go:31] will retry after 414.847114ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T09:08:50Z" level=error msg="open /run/runc: no such file or directory"
	I1123 09:08:50.789180  424563 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 09:08:50.802832  424563 pause.go:52] kubelet running: false
	I1123 09:08:50.802880  424563 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1123 09:08:50.961560  424563 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1123 09:08:50.961641  424563 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1123 09:08:51.029594  424563 cri.go:89] found id: "23f1cd486ac33874f452790b2608401eddaa4a3bd8f96430c807fcfb5e1937b0"
	I1123 09:08:51.029628  424563 cri.go:89] found id: "7941decb9aa0e2ebe4de99bf8450f512bdbb39ec2aa3306ef5a8400615d2d659"
	I1123 09:08:51.029640  424563 cri.go:89] found id: "b6302a52b3f07d041a82fd1384e80a24771f24468d8b556d23d35c13521bfcd3"
	I1123 09:08:51.029647  424563 cri.go:89] found id: "f2f44d09fd70f07bb62953d4d3a45b5459cdef60ec014cf96fd80a6ed19a134b"
	I1123 09:08:51.029650  424563 cri.go:89] found id: "14ae20126a459a7fdf582ec5a271d47e3dca1e142c4ebf9e0350dd559be93573"
	I1123 09:08:51.029654  424563 cri.go:89] found id: "a3bc253f74d935c63450cd3db07c274df85d3f1746da99b79e94bf15141d4c16"
	I1123 09:08:51.029657  424563 cri.go:89] found id: "6ac3ed6ad22f96a5e8a6803a48c463751843af2805ec1400ba36fedc144cf1d9"
	I1123 09:08:51.029660  424563 cri.go:89] found id: "9b89533199bb2186454a2491d3cdd6e0a13a98d889f1739695a869ff190a6ad7"
	I1123 09:08:51.029669  424563 cri.go:89] found id: "1f60fb31039bdce86058df87c7da04ea74adbafc6e245568fb6ab0413a0af065"
	I1123 09:08:51.029676  424563 cri.go:89] found id: "3400c7d3fe5a0c4d0c4a74a2bbd7dfcc480fe5e231914a3065df81f0bdc925f6"
	I1123 09:08:51.029679  424563 cri.go:89] found id: "c7be853bc6291068babb574a6fed0026a725056d23096bb61e1d6ffc9a4a6fa1"
	I1123 09:08:51.029682  424563 cri.go:89] found id: ""
	I1123 09:08:51.029735  424563 ssh_runner.go:195] Run: sudo runc list -f json
	I1123 09:08:51.044125  424563 out.go:203] 
	W1123 09:08:51.045182  424563 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T09:08:51Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T09:08:51Z" level=error msg="open /run/runc: no such file or directory"
	
	W1123 09:08:51.045200  424563 out.go:285] * 
	* 
	W1123 09:08:51.051997  424563 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1123 09:08:51.053187  424563 out.go:203] 

** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p no-preload-619589 --alsologtostderr -v=1 failed: exit status 80
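The stderr capture above walks through minikube's pause sequence: check whether kubelet is active, disable it, enumerate CRI containers via crictl, then run sudo runc list -f json. That last step fails repeatedly with "open /run/runc: no such file or directory", and once the retries are exhausted the command exits with GUEST_PAUSE (exit status 80). A rough Go reproduction of the failing step; the three-attempt cap and fixed delay are assumptions for illustration, minikube's actual backoff lives in retry.go:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	// List running containers the way the pause flow does before
	// freezing them. On this node /run/runc does not exist, so every
	// attempt fails the same way the log above shows.
	for attempt := 1; attempt <= 3; attempt++ {
		out, err := exec.Command("sudo", "runc", "list", "-f", "json").CombinedOutput()
		if err == nil {
			fmt.Printf("running containers: %s\n", out)
			return
		}
		fmt.Printf("attempt %d failed: %v\n%s", attempt, err, out)
		time.Sleep(400 * time.Millisecond)
	}
	fmt.Println("giving up: the pause flow exits with GUEST_PAUSE at this point")
}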
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-619589
helpers_test.go:243: (dbg) docker inspect no-preload-619589:

-- stdout --
	[
	    {
	        "Id": "75a1703935537a599d4d63d8a1fcad8aee76b131885db1cc71f51b6fe5b40328",
	        "Created": "2025-11-23T09:06:25.102316496Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 410146,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-23T09:07:47.679820701Z",
	            "FinishedAt": "2025-11-23T09:07:46.740204172Z"
	        },
	        "Image": "sha256:133ca4ac39008d0056ad45d8cb70521d6b70d6e1b8bbff4678fd4b354efbdf70",
	        "ResolvConfPath": "/var/lib/docker/containers/75a1703935537a599d4d63d8a1fcad8aee76b131885db1cc71f51b6fe5b40328/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/75a1703935537a599d4d63d8a1fcad8aee76b131885db1cc71f51b6fe5b40328/hostname",
	        "HostsPath": "/var/lib/docker/containers/75a1703935537a599d4d63d8a1fcad8aee76b131885db1cc71f51b6fe5b40328/hosts",
	        "LogPath": "/var/lib/docker/containers/75a1703935537a599d4d63d8a1fcad8aee76b131885db1cc71f51b6fe5b40328/75a1703935537a599d4d63d8a1fcad8aee76b131885db1cc71f51b6fe5b40328-json.log",
	        "Name": "/no-preload-619589",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-619589:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "no-preload-619589",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "75a1703935537a599d4d63d8a1fcad8aee76b131885db1cc71f51b6fe5b40328",
	                "LowerDir": "/var/lib/docker/overlay2/5661dec26e35ce89a08317de680c51d7eb44a4cd287120651431aafb742f75ce-init/diff:/var/lib/docker/overlay2/5d7426d7b7590330a796f9ce8929a3dfaf6f95af5e86f4e4ea7f2a7f53308616/diff",
	                "MergedDir": "/var/lib/docker/overlay2/5661dec26e35ce89a08317de680c51d7eb44a4cd287120651431aafb742f75ce/merged",
	                "UpperDir": "/var/lib/docker/overlay2/5661dec26e35ce89a08317de680c51d7eb44a4cd287120651431aafb742f75ce/diff",
	                "WorkDir": "/var/lib/docker/overlay2/5661dec26e35ce89a08317de680c51d7eb44a4cd287120651431aafb742f75ce/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-619589",
	                "Source": "/var/lib/docker/volumes/no-preload-619589/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-619589",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-619589",
	                "name.minikube.sigs.k8s.io": "no-preload-619589",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "50158894eec67c15e3f74a020c71aa5ecece1c5b8566b6f9dc3866697ddc936e",
	            "SandboxKey": "/var/run/docker/netns/50158894eec6",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33113"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33114"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33117"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33115"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33116"
	                    }
	                ]
	            },
	            "Networks": {
	                "no-preload-619589": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "40c67f27f7925004fe92866c39e8b5aa93f9532071ca8f095a0bf7fb3ffde5bf",
	                    "EndpointID": "54652c8b7de63fe9959a1dd3d45b5a50d54525a366635e51c8e478877b2439c3",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "MacAddress": "b2:32:65:27:5f:e5",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-619589",
	                        "75a170393553"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
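The inspect output above shows how the kic container's SSH and API endpoints get their host ports: each entry in HostConfig.PortBindings binds HostIp 127.0.0.1 with an empty HostPort, which asks Docker to assign an ephemeral port at container start, and the resolved values (33113-33117 here) surface only under NetworkSettings.Ports. A minimal Go sketch of reading one resolved port back, using the same inspect template this trace applies later for "22/tcp" (the hostPort helper is illustrative, not minikube's own API):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// hostPort returns the ephemeral host port Docker assigned to a published
	// container port, e.g. hostPort("no-preload-619589", "22/tcp") -> "33113".
	func hostPort(container, port string) (string, error) {
		tmpl := fmt.Sprintf(`{{(index (index .NetworkSettings.Ports %q) 0).HostPort}}`, port)
		out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, container).Output()
		if err != nil {
			return "", err
		}
		return strings.TrimSpace(string(out)), nil
	}

	func main() {
		p, err := hostPort("no-preload-619589", "22/tcp")
		if err != nil {
			panic(err)
		}
		fmt.Println("ssh host port:", p)
	}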
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-619589 -n no-preload-619589
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-619589 -n no-preload-619589: exit status 2 (340.511512ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-619589 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p no-preload-619589 logs -n 25: (1.110245691s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p disable-driver-mounts-740936                                                                                                                                                                                                               │ disable-driver-mounts-740936 │ jenkins │ v1.37.0 │ 23 Nov 25 09:07 UTC │ 23 Nov 25 09:07 UTC │
	│ start   │ -p default-k8s-diff-port-602386 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-602386 │ jenkins │ v1.37.0 │ 23 Nov 25 09:07 UTC │ 23 Nov 25 09:07 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-054094 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-054094       │ jenkins │ v1.37.0 │ 23 Nov 25 09:07 UTC │                     │
	│ stop    │ -p old-k8s-version-054094 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-054094       │ jenkins │ v1.37.0 │ 23 Nov 25 09:07 UTC │ 23 Nov 25 09:07 UTC │
	│ addons  │ enable metrics-server -p no-preload-619589 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-619589            │ jenkins │ v1.37.0 │ 23 Nov 25 09:07 UTC │                     │
	│ stop    │ -p no-preload-619589 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-619589            │ jenkins │ v1.37.0 │ 23 Nov 25 09:07 UTC │ 23 Nov 25 09:07 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-054094 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-054094       │ jenkins │ v1.37.0 │ 23 Nov 25 09:07 UTC │ 23 Nov 25 09:07 UTC │
	│ start   │ -p old-k8s-version-054094 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-054094       │ jenkins │ v1.37.0 │ 23 Nov 25 09:07 UTC │ 23 Nov 25 09:08 UTC │
	│ addons  │ enable dashboard -p no-preload-619589 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-619589            │ jenkins │ v1.37.0 │ 23 Nov 25 09:07 UTC │ 23 Nov 25 09:07 UTC │
	│ start   │ -p no-preload-619589 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-619589            │ jenkins │ v1.37.0 │ 23 Nov 25 09:07 UTC │ 23 Nov 25 09:08 UTC │
	│ addons  │ enable metrics-server -p embed-certs-529341 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-529341           │ jenkins │ v1.37.0 │ 23 Nov 25 09:07 UTC │                     │
	│ stop    │ -p embed-certs-529341 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-529341           │ jenkins │ v1.37.0 │ 23 Nov 25 09:07 UTC │ 23 Nov 25 09:08 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-602386 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-602386 │ jenkins │ v1.37.0 │ 23 Nov 25 09:08 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-602386 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-602386 │ jenkins │ v1.37.0 │ 23 Nov 25 09:08 UTC │ 23 Nov 25 09:08 UTC │
	│ addons  │ enable dashboard -p embed-certs-529341 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-529341           │ jenkins │ v1.37.0 │ 23 Nov 25 09:08 UTC │ 23 Nov 25 09:08 UTC │
	│ start   │ -p embed-certs-529341 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-529341           │ jenkins │ v1.37.0 │ 23 Nov 25 09:08 UTC │                     │
	│ addons  │ enable dashboard -p default-k8s-diff-port-602386 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-602386 │ jenkins │ v1.37.0 │ 23 Nov 25 09:08 UTC │ 23 Nov 25 09:08 UTC │
	│ start   │ -p default-k8s-diff-port-602386 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-602386 │ jenkins │ v1.37.0 │ 23 Nov 25 09:08 UTC │                     │
	│ image   │ old-k8s-version-054094 image list --format=json                                                                                                                                                                                               │ old-k8s-version-054094       │ jenkins │ v1.37.0 │ 23 Nov 25 09:08 UTC │ 23 Nov 25 09:08 UTC │
	│ pause   │ -p old-k8s-version-054094 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-054094       │ jenkins │ v1.37.0 │ 23 Nov 25 09:08 UTC │                     │
	│ delete  │ -p old-k8s-version-054094                                                                                                                                                                                                                     │ old-k8s-version-054094       │ jenkins │ v1.37.0 │ 23 Nov 25 09:08 UTC │ 23 Nov 25 09:08 UTC │
	│ delete  │ -p old-k8s-version-054094                                                                                                                                                                                                                     │ old-k8s-version-054094       │ jenkins │ v1.37.0 │ 23 Nov 25 09:08 UTC │ 23 Nov 25 09:08 UTC │
	│ start   │ -p newest-cni-531046 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-531046            │ jenkins │ v1.37.0 │ 23 Nov 25 09:08 UTC │                     │
	│ image   │ no-preload-619589 image list --format=json                                                                                                                                                                                                    │ no-preload-619589            │ jenkins │ v1.37.0 │ 23 Nov 25 09:08 UTC │ 23 Nov 25 09:08 UTC │
	│ pause   │ -p no-preload-619589 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-619589            │ jenkins │ v1.37.0 │ 23 Nov 25 09:08 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/23 09:08:38
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1123 09:08:38.063057  422371 out.go:360] Setting OutFile to fd 1 ...
	I1123 09:08:38.063185  422371 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 09:08:38.063194  422371 out.go:374] Setting ErrFile to fd 2...
	I1123 09:08:38.063199  422371 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 09:08:38.063491  422371 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21969-103686/.minikube/bin
	I1123 09:08:38.064118  422371 out.go:368] Setting JSON to false
	I1123 09:08:38.065952  422371 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":6658,"bootTime":1763882260,"procs":454,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1123 09:08:38.066040  422371 start.go:143] virtualization: kvm guest
	I1123 09:08:38.068178  422371 out.go:179] * [newest-cni-531046] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1123 09:08:38.069546  422371 out.go:179]   - MINIKUBE_LOCATION=21969
	I1123 09:08:38.069540  422371 notify.go:221] Checking for updates...
	I1123 09:08:38.071773  422371 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1123 09:08:38.073033  422371 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21969-103686/kubeconfig
	I1123 09:08:38.078218  422371 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21969-103686/.minikube
	I1123 09:08:38.079577  422371 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1123 09:08:38.080792  422371 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1123 09:08:38.082709  422371 config.go:182] Loaded profile config "default-k8s-diff-port-602386": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 09:08:38.082880  422371 config.go:182] Loaded profile config "embed-certs-529341": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 09:08:38.083058  422371 config.go:182] Loaded profile config "no-preload-619589": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 09:08:38.083206  422371 driver.go:422] Setting default libvirt URI to qemu:///system
	I1123 09:08:38.112463  422371 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1123 09:08:38.112578  422371 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 09:08:38.187928  422371 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:67 OomKillDisable:false NGoroutines:78 SystemTime:2025-11-23 09:08:38.174595805 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1123 09:08:38.188138  422371 docker.go:319] overlay module found
	I1123 09:08:38.190297  422371 out.go:179] * Using the docker driver based on user configuration
	I1123 09:08:38.194907  422371 start.go:309] selected driver: docker
	I1123 09:08:38.194937  422371 start.go:927] validating driver "docker" against <nil>
	I1123 09:08:38.194956  422371 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1123 09:08:38.195732  422371 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 09:08:38.276488  422371 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:67 OomKillDisable:false NGoroutines:78 SystemTime:2025-11-23 09:08:38.264202445 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1123 09:08:38.276771  422371 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	W1123 09:08:38.276823  422371 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1123 09:08:38.277409  422371 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1123 09:08:38.280766  422371 out.go:179] * Using Docker driver with root privileges
	I1123 09:08:38.283289  422371 cni.go:84] Creating CNI manager for ""
	I1123 09:08:38.283395  422371 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1123 09:08:38.283415  422371 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1123 09:08:38.283547  422371 start.go:353] cluster config:
	{Name:newest-cni-531046 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-531046 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 09:08:38.285004  422371 out.go:179] * Starting "newest-cni-531046" primary control-plane node in "newest-cni-531046" cluster
	I1123 09:08:38.286897  422371 cache.go:134] Beginning downloading kic base image for docker with crio
	I1123 09:08:38.288156  422371 out.go:179] * Pulling base image v0.0.48-1763789673-21948 ...
	I1123 09:08:38.290670  422371 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1123 09:08:38.290731  422371 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21969-103686/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1123 09:08:38.290730  422371 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1123 09:08:38.290745  422371 cache.go:65] Caching tarball of preloaded images
	I1123 09:08:38.290879  422371 preload.go:238] Found /home/jenkins/minikube-integration/21969-103686/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1123 09:08:38.290899  422371 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1123 09:08:38.291221  422371 profile.go:143] Saving config to /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/newest-cni-531046/config.json ...
	I1123 09:08:38.291304  422371 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/newest-cni-531046/config.json: {Name:mk7c2c302507534cf8c19e4462e0d95cc43f265c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 09:08:38.318685  422371 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon, skipping pull
	I1123 09:08:38.318710  422371 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in daemon, skipping load
	I1123 09:08:38.318724  422371 cache.go:243] Successfully downloaded all kic artifacts
	I1123 09:08:38.318779  422371 start.go:360] acquireMachinesLock for newest-cni-531046: {Name:mk2e7449a31b4c230f352b5cfe12c4dd1ce5e4f0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1123 09:08:38.318885  422371 start.go:364] duration metric: took 86.746µs to acquireMachinesLock for "newest-cni-531046"
	I1123 09:08:38.318916  422371 start.go:93] Provisioning new machine with config: &{Name:newest-cni-531046 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-531046 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1123 09:08:38.319041  422371 start.go:125] createHost starting for "" (driver="docker")
	W1123 09:08:35.978152  416838 pod_ready.go:104] pod "coredns-66bc5c9577-64rdm" is not "Ready", error: <nil>
	W1123 09:08:38.464711  416838 pod_ready.go:104] pod "coredns-66bc5c9577-64rdm" is not "Ready", error: <nil>
	W1123 09:08:39.107107  415250 pod_ready.go:104] pod "coredns-66bc5c9577-k4bmj" is not "Ready", error: <nil>
	W1123 09:08:41.606781  415250 pod_ready.go:104] pod "coredns-66bc5c9577-k4bmj" is not "Ready", error: <nil>
	I1123 09:08:38.321329  422371 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1123 09:08:38.321615  422371 start.go:159] libmachine.API.Create for "newest-cni-531046" (driver="docker")
	I1123 09:08:38.321673  422371 client.go:173] LocalClient.Create starting
	I1123 09:08:38.321773  422371 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21969-103686/.minikube/certs/ca.pem
	I1123 09:08:38.321807  422371 main.go:143] libmachine: Decoding PEM data...
	I1123 09:08:38.321832  422371 main.go:143] libmachine: Parsing certificate...
	I1123 09:08:38.321892  422371 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21969-103686/.minikube/certs/cert.pem
	I1123 09:08:38.321919  422371 main.go:143] libmachine: Decoding PEM data...
	I1123 09:08:38.321937  422371 main.go:143] libmachine: Parsing certificate...
	I1123 09:08:38.322379  422371 cli_runner.go:164] Run: docker network inspect newest-cni-531046 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1123 09:08:38.344902  422371 cli_runner.go:211] docker network inspect newest-cni-531046 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1123 09:08:38.345023  422371 network_create.go:284] running [docker network inspect newest-cni-531046] to gather additional debugging logs...
	I1123 09:08:38.345053  422371 cli_runner.go:164] Run: docker network inspect newest-cni-531046
	W1123 09:08:38.367879  422371 cli_runner.go:211] docker network inspect newest-cni-531046 returned with exit code 1
	I1123 09:08:38.367919  422371 network_create.go:287] error running [docker network inspect newest-cni-531046]: docker network inspect newest-cni-531046: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-531046 not found
	I1123 09:08:38.367936  422371 network_create.go:289] output of [docker network inspect newest-cni-531046]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-531046 not found
	
	** /stderr **
	I1123 09:08:38.368061  422371 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1123 09:08:38.393249  422371 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-f35ea3fda0f8 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:12:67:c4:67:42:d0} reservation:<nil>}
	I1123 09:08:38.394053  422371 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-b5718ee288aa IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:5e:cf:46:ea:6c:f7} reservation:<nil>}
	I1123 09:08:38.394911  422371 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-7539aab81c9c IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:ca:4a:40:12:17:c0} reservation:<nil>}
	I1123 09:08:38.395851  422371 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000364fa0}
	I1123 09:08:38.395895  422371 network_create.go:124] attempt to create docker network newest-cni-531046 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1123 09:08:38.395992  422371 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-531046 newest-cni-531046
	I1123 09:08:38.468495  422371 network_create.go:108] docker network newest-cni-531046 192.168.76.0/24 created
	I1123 09:08:38.468547  422371 kic.go:121] calculated static IP "192.168.76.2" for the "newest-cni-531046" container
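The three "skipping subnet" lines above trace minikube's free-subnet scan: it starts at 192.168.49.0/24 and steps the third octet by 9 (49, 58, 67, ...) until it reaches a /24 with no existing bridge interface, then takes .1 as the gateway and .2 as the node's static IP. A rough Go sketch of that scan, assuming the step-by-9 pattern observed in this trace (freeSubnet and the taken set are illustrative, not minikube's actual code):

	package main

	import "fmt"

	// freeSubnet walks 192.168.49.0/24, 192.168.58.0/24, ... and returns the
	// first candidate not already claimed by an existing bridge network.
	func freeSubnet(taken map[string]bool) (string, bool) {
		for octet := 49; octet < 256; octet += 9 {
			subnet := fmt.Sprintf("192.168.%d.0/24", octet)
			if !taken[subnet] {
				return subnet, true
			}
		}
		return "", false
	}

	func main() {
		// Subnets the trace above reports as taken.
		taken := map[string]bool{
			"192.168.49.0/24": true,
			"192.168.58.0/24": true,
			"192.168.67.0/24": true,
		}
		s, _ := freeSubnet(taken)
		fmt.Println(s) // 192.168.76.0/24
	}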
	I1123 09:08:38.468621  422371 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1123 09:08:38.493023  422371 cli_runner.go:164] Run: docker volume create newest-cni-531046 --label name.minikube.sigs.k8s.io=newest-cni-531046 --label created_by.minikube.sigs.k8s.io=true
	I1123 09:08:38.517144  422371 oci.go:103] Successfully created a docker volume newest-cni-531046
	I1123 09:08:38.517276  422371 cli_runner.go:164] Run: docker run --rm --name newest-cni-531046-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-531046 --entrypoint /usr/bin/test -v newest-cni-531046:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -d /var/lib
	I1123 09:08:39.766029  422371 cli_runner.go:217] Completed: docker run --rm --name newest-cni-531046-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-531046 --entrypoint /usr/bin/test -v newest-cni-531046:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -d /var/lib: (1.248658868s)
	I1123 09:08:39.766066  422371 oci.go:107] Successfully prepared a docker volume newest-cni-531046
	I1123 09:08:39.766102  422371 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1123 09:08:39.766113  422371 kic.go:194] Starting extracting preloaded images to volume ...
	I1123 09:08:39.766178  422371 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21969-103686/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-531046:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -I lz4 -xf /preloaded.tar -C /extractDir
	W1123 09:08:40.961999  416838 pod_ready.go:104] pod "coredns-66bc5c9577-64rdm" is not "Ready", error: <nil>
	W1123 09:08:43.152956  416838 pod_ready.go:104] pod "coredns-66bc5c9577-64rdm" is not "Ready", error: <nil>
	I1123 09:08:44.168223  422371 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21969-103686/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-531046:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -I lz4 -xf /preloaded.tar -C /extractDir: (4.401997577s)
	I1123 09:08:44.168259  422371 kic.go:203] duration metric: took 4.402142717s to extract preloaded images to volume ...
	W1123 09:08:44.168355  422371 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1123 09:08:44.168395  422371 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1123 09:08:44.168452  422371 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1123 09:08:44.227151  422371 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-531046 --name newest-cni-531046 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-531046 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-531046 --network newest-cni-531046 --ip 192.168.76.2 --volume newest-cni-531046:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f
	I1123 09:08:44.537884  422371 cli_runner.go:164] Run: docker container inspect newest-cni-531046 --format={{.State.Running}}
	I1123 09:08:44.557705  422371 cli_runner.go:164] Run: docker container inspect newest-cni-531046 --format={{.State.Status}}
	I1123 09:08:44.577764  422371 cli_runner.go:164] Run: docker exec newest-cni-531046 stat /var/lib/dpkg/alternatives/iptables
	I1123 09:08:44.624704  422371 oci.go:144] the created container "newest-cni-531046" has a running status.
	I1123 09:08:44.624733  422371 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21969-103686/.minikube/machines/newest-cni-531046/id_rsa...
	I1123 09:08:44.736260  422371 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21969-103686/.minikube/machines/newest-cni-531046/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1123 09:08:44.766667  422371 cli_runner.go:164] Run: docker container inspect newest-cni-531046 --format={{.State.Status}}
	I1123 09:08:44.790662  422371 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1123 09:08:44.790697  422371 kic_runner.go:114] Args: [docker exec --privileged newest-cni-531046 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1123 09:08:44.838987  422371 cli_runner.go:164] Run: docker container inspect newest-cni-531046 --format={{.State.Status}}
	I1123 09:08:44.862887  422371 machine.go:94] provisionDockerMachine start ...
	I1123 09:08:44.863033  422371 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-531046
	I1123 09:08:44.883292  422371 main.go:143] libmachine: Using SSH client type: native
	I1123 09:08:44.883587  422371 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33128 <nil> <nil>}
	I1123 09:08:44.883606  422371 main.go:143] libmachine: About to run SSH command:
	hostname
	I1123 09:08:45.031996  422371 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-531046
	
	I1123 09:08:45.032028  422371 ubuntu.go:182] provisioning hostname "newest-cni-531046"
	I1123 09:08:45.032102  422371 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-531046
	I1123 09:08:45.051178  422371 main.go:143] libmachine: Using SSH client type: native
	I1123 09:08:45.051497  422371 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33128 <nil> <nil>}
	I1123 09:08:45.051524  422371 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-531046 && echo "newest-cni-531046" | sudo tee /etc/hostname
	I1123 09:08:45.208664  422371 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-531046
	
	I1123 09:08:45.208761  422371 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-531046
	I1123 09:08:45.227777  422371 main.go:143] libmachine: Using SSH client type: native
	I1123 09:08:45.228018  422371 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33128 <nil> <nil>}
	I1123 09:08:45.228039  422371 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-531046' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-531046/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-531046' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1123 09:08:45.371606  422371 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1123 09:08:45.371632  422371 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21969-103686/.minikube CaCertPath:/home/jenkins/minikube-integration/21969-103686/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21969-103686/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21969-103686/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21969-103686/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21969-103686/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21969-103686/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21969-103686/.minikube}
	I1123 09:08:45.371651  422371 ubuntu.go:190] setting up certificates
	I1123 09:08:45.371662  422371 provision.go:84] configureAuth start
	I1123 09:08:45.371721  422371 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-531046
	I1123 09:08:45.389774  422371 provision.go:143] copyHostCerts
	I1123 09:08:45.389830  422371 exec_runner.go:144] found /home/jenkins/minikube-integration/21969-103686/.minikube/ca.pem, removing ...
	I1123 09:08:45.389843  422371 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21969-103686/.minikube/ca.pem
	I1123 09:08:45.389919  422371 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21969-103686/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21969-103686/.minikube/ca.pem (1078 bytes)
	I1123 09:08:45.390046  422371 exec_runner.go:144] found /home/jenkins/minikube-integration/21969-103686/.minikube/cert.pem, removing ...
	I1123 09:08:45.390057  422371 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21969-103686/.minikube/cert.pem
	I1123 09:08:45.390089  422371 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21969-103686/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21969-103686/.minikube/cert.pem (1123 bytes)
	I1123 09:08:45.390147  422371 exec_runner.go:144] found /home/jenkins/minikube-integration/21969-103686/.minikube/key.pem, removing ...
	I1123 09:08:45.390155  422371 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21969-103686/.minikube/key.pem
	I1123 09:08:45.390179  422371 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21969-103686/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21969-103686/.minikube/key.pem (1675 bytes)
	I1123 09:08:45.390230  422371 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21969-103686/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21969-103686/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21969-103686/.minikube/certs/ca-key.pem org=jenkins.newest-cni-531046 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-531046]
	I1123 09:08:45.541072  422371 provision.go:177] copyRemoteCerts
	I1123 09:08:45.541133  422371 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1123 09:08:45.541174  422371 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-531046
	I1123 09:08:45.562117  422371 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21969-103686/.minikube/machines/newest-cni-531046/id_rsa Username:docker}
	I1123 09:08:45.667284  422371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-103686/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1123 09:08:45.686630  422371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-103686/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1123 09:08:45.703786  422371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-103686/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1123 09:08:45.722197  422371 provision.go:87] duration metric: took 350.521493ms to configureAuth
	I1123 09:08:45.722225  422371 ubuntu.go:206] setting minikube options for container-runtime
	I1123 09:08:45.722396  422371 config.go:182] Loaded profile config "newest-cni-531046": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 09:08:45.722498  422371 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-531046
	I1123 09:08:45.742391  422371 main.go:143] libmachine: Using SSH client type: native
	I1123 09:08:45.742648  422371 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33128 <nil> <nil>}
	I1123 09:08:45.742671  422371 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1123 09:08:46.039742  422371 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1123 09:08:46.039768  422371 machine.go:97] duration metric: took 1.176833241s to provisionDockerMachine
	I1123 09:08:46.039779  422371 client.go:176] duration metric: took 7.718098891s to LocalClient.Create
	I1123 09:08:46.039798  422371 start.go:167] duration metric: took 7.718185893s to libmachine.API.Create "newest-cni-531046"
	I1123 09:08:46.039814  422371 start.go:293] postStartSetup for "newest-cni-531046" (driver="docker")
	I1123 09:08:46.039831  422371 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1123 09:08:46.039890  422371 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1123 09:08:46.039953  422371 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-531046
	I1123 09:08:46.058468  422371 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21969-103686/.minikube/machines/newest-cni-531046/id_rsa Username:docker}
	I1123 09:08:46.161505  422371 ssh_runner.go:195] Run: cat /etc/os-release
	I1123 09:08:46.164981  422371 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1123 09:08:46.165015  422371 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1123 09:08:46.165036  422371 filesync.go:126] Scanning /home/jenkins/minikube-integration/21969-103686/.minikube/addons for local assets ...
	I1123 09:08:46.165097  422371 filesync.go:126] Scanning /home/jenkins/minikube-integration/21969-103686/.minikube/files for local assets ...
	I1123 09:08:46.165191  422371 filesync.go:149] local asset: /home/jenkins/minikube-integration/21969-103686/.minikube/files/etc/ssl/certs/1072342.pem -> 1072342.pem in /etc/ssl/certs
	I1123 09:08:46.165314  422371 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1123 09:08:46.172750  422371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-103686/.minikube/files/etc/ssl/certs/1072342.pem --> /etc/ssl/certs/1072342.pem (1708 bytes)
	I1123 09:08:46.192188  422371 start.go:296] duration metric: took 152.355864ms for postStartSetup
	I1123 09:08:46.192503  422371 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-531046
	I1123 09:08:46.210543  422371 profile.go:143] Saving config to /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/newest-cni-531046/config.json ...
	I1123 09:08:46.210794  422371 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1123 09:08:46.210839  422371 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-531046
	I1123 09:08:46.227599  422371 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21969-103686/.minikube/machines/newest-cni-531046/id_rsa Username:docker}
	I1123 09:08:46.326028  422371 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1123 09:08:46.330652  422371 start.go:128] duration metric: took 8.011592804s to createHost
	I1123 09:08:46.330683  422371 start.go:83] releasing machines lock for "newest-cni-531046", held for 8.011781957s
	I1123 09:08:46.330787  422371 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-531046
	I1123 09:08:46.349571  422371 ssh_runner.go:195] Run: cat /version.json
	I1123 09:08:46.349646  422371 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1123 09:08:46.349654  422371 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-531046
	I1123 09:08:46.349732  422371 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-531046
	I1123 09:08:46.369439  422371 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21969-103686/.minikube/machines/newest-cni-531046/id_rsa Username:docker}
	I1123 09:08:46.369528  422371 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21969-103686/.minikube/machines/newest-cni-531046/id_rsa Username:docker}
	I1123 09:08:46.522442  422371 ssh_runner.go:195] Run: systemctl --version
	I1123 09:08:46.528993  422371 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1123 09:08:46.564054  422371 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1123 09:08:46.569001  422371 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1123 09:08:46.569074  422371 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1123 09:08:46.595210  422371 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1123 09:08:46.595236  422371 start.go:496] detecting cgroup driver to use...
	I1123 09:08:46.595269  422371 detect.go:190] detected "systemd" cgroup driver on host os
	I1123 09:08:46.595320  422371 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1123 09:08:46.613403  422371 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1123 09:08:46.626130  422371 docker.go:218] disabling cri-docker service (if available) ...
	I1123 09:08:46.626178  422371 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1123 09:08:46.642775  422371 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1123 09:08:46.659791  422371 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1123 09:08:46.745157  422371 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1123 09:08:46.834362  422371 docker.go:234] disabling docker service ...
	I1123 09:08:46.834431  422371 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1123 09:08:46.852811  422371 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1123 09:08:46.865931  422371 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1123 09:08:46.951051  422371 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1123 09:08:47.039859  422371 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1123 09:08:47.052884  422371 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1123 09:08:47.067115  422371 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1123 09:08:47.067181  422371 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 09:08:47.077039  422371 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1123 09:08:47.077101  422371 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 09:08:47.086161  422371 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 09:08:47.094941  422371 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 09:08:47.103813  422371 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1123 09:08:47.112463  422371 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 09:08:47.121360  422371 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 09:08:47.135303  422371 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
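
The sed/grep edits above amount to a small CRI-O drop-in. Reconstructed from the commands themselves (an illustrative sketch of the resulting fragment, not a capture of the actual file), /etc/crio/crio.conf.d/02-crio.conf ends up containing roughly:

    pause_image = "registry.k8s.io/pause:3.10.1"
    cgroup_manager = "systemd"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]
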
	I1123 09:08:47.144129  422371 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1123 09:08:47.151329  422371 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1123 09:08:47.158513  422371 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 09:08:47.240112  422371 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1123 09:08:47.375544  422371 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1123 09:08:47.375611  422371 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1123 09:08:47.379548  422371 start.go:564] Will wait 60s for crictl version
	I1123 09:08:47.379618  422371 ssh_runner.go:195] Run: which crictl
	I1123 09:08:47.383442  422371 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1123 09:08:47.410017  422371 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1123 09:08:47.410104  422371 ssh_runner.go:195] Run: crio --version
	I1123 09:08:47.437993  422371 ssh_runner.go:195] Run: crio --version
	I1123 09:08:47.468147  422371 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1123 09:08:47.469409  422371 cli_runner.go:164] Run: docker network inspect newest-cni-531046 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1123 09:08:47.488435  422371 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1123 09:08:47.492623  422371 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1123 09:08:47.504612  422371 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	W1123 09:08:43.724599  415250 pod_ready.go:104] pod "coredns-66bc5c9577-k4bmj" is not "Ready", error: <nil>
	W1123 09:08:46.105841  415250 pod_ready.go:104] pod "coredns-66bc5c9577-k4bmj" is not "Ready", error: <nil>
	I1123 09:08:47.505728  422371 kubeadm.go:884] updating cluster {Name:newest-cni-531046 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-531046 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1123 09:08:47.505853  422371 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1123 09:08:47.505903  422371 ssh_runner.go:195] Run: sudo crictl images --output json
	I1123 09:08:47.538254  422371 crio.go:514] all images are preloaded for cri-o runtime.
	I1123 09:08:47.538286  422371 crio.go:433] Images already preloaded, skipping extraction
	I1123 09:08:47.538352  422371 ssh_runner.go:195] Run: sudo crictl images --output json
	I1123 09:08:47.565164  422371 crio.go:514] all images are preloaded for cri-o runtime.
	I1123 09:08:47.565187  422371 cache_images.go:86] Images are preloaded, skipping loading
	I1123 09:08:47.565194  422371 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1123 09:08:47.565289  422371 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-531046 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-531046 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1123 09:08:47.565368  422371 ssh_runner.go:195] Run: crio config
	I1123 09:08:47.612790  422371 cni.go:84] Creating CNI manager for ""
	I1123 09:08:47.612810  422371 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1123 09:08:47.612827  422371 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1123 09:08:47.612854  422371 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-531046 NodeName:newest-cni-531046 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1123 09:08:47.613060  422371 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-531046"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
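A generated file like the one above can be exercised without touching node state before kubeadm consumes it; --dry-run is a standard kubeadm init flag, and the path below is the one minikube copies the file to a few lines further down (a sketch, assuming you are shelled into the node):

    sudo /var/lib/minikube/binaries/v1.34.1/kubeadm init \
      --config /var/tmp/minikube/kubeadm.yaml --dry-run
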
	I1123 09:08:47.613154  422371 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1123 09:08:47.621671  422371 binaries.go:51] Found k8s binaries, skipping transfer
	I1123 09:08:47.621729  422371 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1123 09:08:47.630324  422371 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1123 09:08:47.644458  422371 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1123 09:08:47.660152  422371 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2211 bytes)
	I1123 09:08:47.673398  422371 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1123 09:08:47.677342  422371 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1123 09:08:47.687438  422371 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 09:08:47.763532  422371 ssh_runner.go:195] Run: sudo systemctl start kubelet
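
If a start stalls at this point, the kubelet unit and drop-in written above are plain systemd artifacts and can be inspected with the usual tooling (nothing minikube-specific is required):

    systemctl status kubelet
    journalctl -u kubelet --no-pager -n 50
    cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
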
	I1123 09:08:47.791488  422371 certs.go:69] Setting up /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/newest-cni-531046 for IP: 192.168.76.2
	I1123 09:08:47.791510  422371 certs.go:195] generating shared ca certs ...
	I1123 09:08:47.791526  422371 certs.go:227] acquiring lock for ca certs: {Name:mkeed0bc088459219396fb35c578dfb0927f31a3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 09:08:47.791688  422371 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21969-103686/.minikube/ca.key
	I1123 09:08:47.791739  422371 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21969-103686/.minikube/proxy-client-ca.key
	I1123 09:08:47.791753  422371 certs.go:257] generating profile certs ...
	I1123 09:08:47.791817  422371 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/newest-cni-531046/client.key
	I1123 09:08:47.791838  422371 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/newest-cni-531046/client.crt with IP's: []
	I1123 09:08:48.032392  422371 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/newest-cni-531046/client.crt ...
	I1123 09:08:48.032421  422371 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/newest-cni-531046/client.crt: {Name:mk976144a784e1f402ce91ac1356851c2af8ab52 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 09:08:48.032598  422371 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/newest-cni-531046/client.key ...
	I1123 09:08:48.032609  422371 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/newest-cni-531046/client.key: {Name:mk8689b72f501cc91be234b56f833c373d45d735 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 09:08:48.032703  422371 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/newest-cni-531046/apiserver.key.a1ea44be
	I1123 09:08:48.032718  422371 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/newest-cni-531046/apiserver.crt.a1ea44be with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1123 09:08:48.154375  422371 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/newest-cni-531046/apiserver.crt.a1ea44be ...
	I1123 09:08:48.154406  422371 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/newest-cni-531046/apiserver.crt.a1ea44be: {Name:mk85fa4339f770b1cc1a8ab21bd48c1535d0f2e2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 09:08:48.154593  422371 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/newest-cni-531046/apiserver.key.a1ea44be ...
	I1123 09:08:48.154615  422371 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/newest-cni-531046/apiserver.key.a1ea44be: {Name:mkb7dccd9dbd4c24a7085c85e649fe0ef0b2bed2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 09:08:48.154724  422371 certs.go:382] copying /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/newest-cni-531046/apiserver.crt.a1ea44be -> /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/newest-cni-531046/apiserver.crt
	I1123 09:08:48.154801  422371 certs.go:386] copying /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/newest-cni-531046/apiserver.key.a1ea44be -> /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/newest-cni-531046/apiserver.key
	I1123 09:08:48.154856  422371 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/newest-cni-531046/proxy-client.key
	I1123 09:08:48.154871  422371 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/newest-cni-531046/proxy-client.crt with IP's: []
	I1123 09:08:48.278947  422371 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/newest-cni-531046/proxy-client.crt ...
	I1123 09:08:48.278983  422371 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/newest-cni-531046/proxy-client.crt: {Name:mk7a585a20c8ee02e9d23266d3061e7bc61a2b9c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 09:08:48.279150  422371 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/newest-cni-531046/proxy-client.key ...
	I1123 09:08:48.279164  422371 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/newest-cni-531046/proxy-client.key: {Name:mka0e35c957b541ffc74ca4dd08e09a485deaafe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 09:08:48.279336  422371 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-103686/.minikube/certs/107234.pem (1338 bytes)
	W1123 09:08:48.279376  422371 certs.go:480] ignoring /home/jenkins/minikube-integration/21969-103686/.minikube/certs/107234_empty.pem, impossibly tiny 0 bytes
	I1123 09:08:48.279387  422371 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-103686/.minikube/certs/ca-key.pem (1675 bytes)
	I1123 09:08:48.279410  422371 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-103686/.minikube/certs/ca.pem (1078 bytes)
	I1123 09:08:48.279433  422371 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-103686/.minikube/certs/cert.pem (1123 bytes)
	I1123 09:08:48.279455  422371 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-103686/.minikube/certs/key.pem (1675 bytes)
	I1123 09:08:48.279508  422371 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-103686/.minikube/files/etc/ssl/certs/1072342.pem (1708 bytes)
	I1123 09:08:48.280084  422371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-103686/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1123 09:08:48.299345  422371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-103686/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1123 09:08:48.317116  422371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-103686/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1123 09:08:48.334058  422371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-103686/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1671 bytes)
	I1123 09:08:48.350946  422371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/newest-cni-531046/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1123 09:08:48.368229  422371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/newest-cni-531046/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1123 09:08:48.385613  422371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/newest-cni-531046/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1123 09:08:48.402695  422371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/newest-cni-531046/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1123 09:08:48.421116  422371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-103686/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1123 09:08:48.440103  422371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-103686/.minikube/certs/107234.pem --> /usr/share/ca-certificates/107234.pem (1338 bytes)
	I1123 09:08:48.458407  422371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-103686/.minikube/files/etc/ssl/certs/1072342.pem --> /usr/share/ca-certificates/1072342.pem (1708 bytes)
	I1123 09:08:48.476020  422371 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1123 09:08:48.488750  422371 ssh_runner.go:195] Run: openssl version
	I1123 09:08:48.494927  422371 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1123 09:08:48.503050  422371 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1123 09:08:48.506775  422371 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 23 08:20 /usr/share/ca-certificates/minikubeCA.pem
	I1123 09:08:48.506822  422371 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1123 09:08:48.543113  422371 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1123 09:08:48.552172  422371 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/107234.pem && ln -fs /usr/share/ca-certificates/107234.pem /etc/ssl/certs/107234.pem"
	I1123 09:08:48.560784  422371 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/107234.pem
	I1123 09:08:48.564768  422371 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 23 08:25 /usr/share/ca-certificates/107234.pem
	I1123 09:08:48.564828  422371 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/107234.pem
	I1123 09:08:48.608294  422371 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/107234.pem /etc/ssl/certs/51391683.0"
	I1123 09:08:48.617212  422371 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1072342.pem && ln -fs /usr/share/ca-certificates/1072342.pem /etc/ssl/certs/1072342.pem"
	I1123 09:08:48.626532  422371 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1072342.pem
	I1123 09:08:48.630942  422371 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 23 08:25 /usr/share/ca-certificates/1072342.pem
	I1123 09:08:48.631085  422371 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1072342.pem
	I1123 09:08:48.666782  422371 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1072342.pem /etc/ssl/certs/3ec20f2e.0"
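
The hash-named links created above (b5213941.0, 51391683.0, 3ec20f2e.0) follow the OpenSSL subject-hash convention: each link name is the certificate's subject hash plus a .0 suffix, which is what lets TLS clients locate a CA by hash lookup in /etc/ssl/certs. A minimal sketch of the same step for one certificate:

    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"
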
	I1123 09:08:48.677518  422371 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1123 09:08:48.681151  422371 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1123 09:08:48.681214  422371 kubeadm.go:401] StartCluster: {Name:newest-cni-531046 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-531046 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 09:08:48.681302  422371 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1123 09:08:48.681360  422371 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1123 09:08:48.707656  422371 cri.go:89] found id: ""
	I1123 09:08:48.707721  422371 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1123 09:08:48.715885  422371 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1123 09:08:48.724069  422371 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1123 09:08:48.724125  422371 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1123 09:08:48.731960  422371 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1123 09:08:48.731995  422371 kubeadm.go:158] found existing configuration files:
	
	I1123 09:08:48.732033  422371 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1123 09:08:48.740868  422371 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1123 09:08:48.740935  422371 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1123 09:08:48.749362  422371 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1123 09:08:48.757073  422371 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1123 09:08:48.757137  422371 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1123 09:08:48.764375  422371 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1123 09:08:48.772291  422371 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1123 09:08:48.772337  422371 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1123 09:08:48.779794  422371 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1123 09:08:48.788802  422371 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1123 09:08:48.788876  422371 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
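
The four grep/rm pairs above implement one rule: any leftover kubeconfig under /etc/kubernetes that does not reference this cluster's control-plane endpoint is removed before kubeadm init runs on the next line. Condensed into a loop, the logic is equivalent to (a sketch, not minikube's actual code):

    for f in admin kubelet controller-manager scheduler; do
      sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/${f}.conf" \
        || sudo rm -f "/etc/kubernetes/${f}.conf"
    done
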
	I1123 09:08:48.796307  422371 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1123 09:08:48.834042  422371 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1123 09:08:48.834787  422371 kubeadm.go:319] [preflight] Running pre-flight checks
	I1123 09:08:48.853738  422371 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1123 09:08:48.853845  422371 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1044-gcp
	I1123 09:08:48.853911  422371 kubeadm.go:319] OS: Linux
	I1123 09:08:48.853979  422371 kubeadm.go:319] CGROUPS_CPU: enabled
	I1123 09:08:48.854052  422371 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1123 09:08:48.854114  422371 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1123 09:08:48.854191  422371 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1123 09:08:48.854267  422371 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1123 09:08:48.854347  422371 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1123 09:08:48.854431  422371 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1123 09:08:48.854474  422371 kubeadm.go:319] CGROUPS_IO: enabled
	I1123 09:08:48.912767  422371 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1123 09:08:48.912928  422371 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1123 09:08:48.913088  422371 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1123 09:08:48.923434  422371 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	W1123 09:08:45.461624  416838 pod_ready.go:104] pod "coredns-66bc5c9577-64rdm" is not "Ready", error: <nil>
	W1123 09:08:47.461833  416838 pod_ready.go:104] pod "coredns-66bc5c9577-64rdm" is not "Ready", error: <nil>
	
	
	==> CRI-O <==
	Nov 23 09:08:08 no-preload-619589 crio[572]: time="2025-11-23T09:08:08.04442582Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 23 09:08:08 no-preload-619589 crio[572]: time="2025-11-23T09:08:08.04799361Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 23 09:08:08 no-preload-619589 crio[572]: time="2025-11-23T09:08:08.048016723Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 23 09:08:08 no-preload-619589 crio[572]: time="2025-11-23T09:08:08.19407123Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=f516b2f8-58c7-4c56-ba9d-bf4a1476d39f name=/runtime.v1.ImageService/ImageStatus
	Nov 23 09:08:08 no-preload-619589 crio[572]: time="2025-11-23T09:08:08.197095646Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=525786ee-e0ad-433e-a799-da7e8e51e650 name=/runtime.v1.ImageService/ImageStatus
	Nov 23 09:08:08 no-preload-619589 crio[572]: time="2025-11-23T09:08:08.200244923Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-9lfkm/dashboard-metrics-scraper" id=ced9e8d6-5d86-4a21-a79b-4ab1d9fc845b name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 09:08:08 no-preload-619589 crio[572]: time="2025-11-23T09:08:08.200381572Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 09:08:08 no-preload-619589 crio[572]: time="2025-11-23T09:08:08.210164479Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 09:08:08 no-preload-619589 crio[572]: time="2025-11-23T09:08:08.210721804Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 09:08:08 no-preload-619589 crio[572]: time="2025-11-23T09:08:08.234509806Z" level=info msg="Created container 133b067606a6c30c45e9d04b58352d7a92359119337c1d54259e4c5cf7989151: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-9lfkm/dashboard-metrics-scraper" id=ced9e8d6-5d86-4a21-a79b-4ab1d9fc845b name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 09:08:08 no-preload-619589 crio[572]: time="2025-11-23T09:08:08.235213677Z" level=info msg="Starting container: 133b067606a6c30c45e9d04b58352d7a92359119337c1d54259e4c5cf7989151" id=4775bc7e-53eb-4cce-abae-eb9304187ed3 name=/runtime.v1.RuntimeService/StartContainer
	Nov 23 09:08:08 no-preload-619589 crio[572]: time="2025-11-23T09:08:08.236912267Z" level=info msg="Started container" PID=1764 containerID=133b067606a6c30c45e9d04b58352d7a92359119337c1d54259e4c5cf7989151 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-9lfkm/dashboard-metrics-scraper id=4775bc7e-53eb-4cce-abae-eb9304187ed3 name=/runtime.v1.RuntimeService/StartContainer sandboxID=7a836d85f778a95d1feea58357164c5b223e2efaf6ab192de666e1a9cdc19f23
	Nov 23 09:08:09 no-preload-619589 crio[572]: time="2025-11-23T09:08:09.199578908Z" level=info msg="Removing container: bc6d5b213b2f6dd05620d6da8131d8f328db25e8c3d4fe2a16d3d90267b62824" id=8b25b206-2c6e-49e5-a8a0-86dc94991145 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 23 09:08:09 no-preload-619589 crio[572]: time="2025-11-23T09:08:09.21132694Z" level=info msg="Removed container bc6d5b213b2f6dd05620d6da8131d8f328db25e8c3d4fe2a16d3d90267b62824: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-9lfkm/dashboard-metrics-scraper" id=8b25b206-2c6e-49e5-a8a0-86dc94991145 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 23 09:08:27 no-preload-619589 crio[572]: time="2025-11-23T09:08:27.127925963Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=580b103f-9766-453f-a537-ff32e5ebdcd3 name=/runtime.v1.ImageService/ImageStatus
	Nov 23 09:08:27 no-preload-619589 crio[572]: time="2025-11-23T09:08:27.129009803Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=e23eea18-774c-40ea-9ac0-781bb157918d name=/runtime.v1.ImageService/ImageStatus
	Nov 23 09:08:27 no-preload-619589 crio[572]: time="2025-11-23T09:08:27.130285985Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-9lfkm/dashboard-metrics-scraper" id=537c22a0-78f1-418f-8df0-9c9e96eddbff name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 09:08:27 no-preload-619589 crio[572]: time="2025-11-23T09:08:27.130424091Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 09:08:27 no-preload-619589 crio[572]: time="2025-11-23T09:08:27.137734323Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 09:08:27 no-preload-619589 crio[572]: time="2025-11-23T09:08:27.138409867Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 09:08:27 no-preload-619589 crio[572]: time="2025-11-23T09:08:27.175996718Z" level=info msg="Created container 3400c7d3fe5a0c4d0c4a74a2bbd7dfcc480fe5e231914a3065df81f0bdc925f6: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-9lfkm/dashboard-metrics-scraper" id=537c22a0-78f1-418f-8df0-9c9e96eddbff name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 09:08:27 no-preload-619589 crio[572]: time="2025-11-23T09:08:27.176645815Z" level=info msg="Starting container: 3400c7d3fe5a0c4d0c4a74a2bbd7dfcc480fe5e231914a3065df81f0bdc925f6" id=2cff9469-89de-4344-8963-286871d710f0 name=/runtime.v1.RuntimeService/StartContainer
	Nov 23 09:08:27 no-preload-619589 crio[572]: time="2025-11-23T09:08:27.180675818Z" level=info msg="Started container" PID=1774 containerID=3400c7d3fe5a0c4d0c4a74a2bbd7dfcc480fe5e231914a3065df81f0bdc925f6 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-9lfkm/dashboard-metrics-scraper id=2cff9469-89de-4344-8963-286871d710f0 name=/runtime.v1.RuntimeService/StartContainer sandboxID=7a836d85f778a95d1feea58357164c5b223e2efaf6ab192de666e1a9cdc19f23
	Nov 23 09:08:27 no-preload-619589 crio[572]: time="2025-11-23T09:08:27.2498808Z" level=info msg="Removing container: 133b067606a6c30c45e9d04b58352d7a92359119337c1d54259e4c5cf7989151" id=c6b998b3-f48a-47e0-a541-66a9c5ee7beb name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 23 09:08:27 no-preload-619589 crio[572]: time="2025-11-23T09:08:27.263130957Z" level=info msg="Removed container 133b067606a6c30c45e9d04b58352d7a92359119337c1d54259e4c5cf7989151: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-9lfkm/dashboard-metrics-scraper" id=c6b998b3-f48a-47e0-a541-66a9c5ee7beb name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	3400c7d3fe5a0       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           24 seconds ago      Exited              dashboard-metrics-scraper   2                   7a836d85f778a       dashboard-metrics-scraper-6ffb444bf9-9lfkm   kubernetes-dashboard
	c7be853bc6291       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   47 seconds ago      Running             kubernetes-dashboard        0                   c9470f9844bdd       kubernetes-dashboard-855c9754f9-d5gfp        kubernetes-dashboard
	23f1cd486ac33       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           53 seconds ago      Running             storage-provisioner         1                   a1aa4008de3fa       storage-provisioner                          kube-system
	8c900a51b3205       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           54 seconds ago      Running             busybox                     1                   fc0caab9d915a       busybox                                      default
	7941decb9aa0e       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           54 seconds ago      Running             coredns                     0                   5d13ea70f2c74       coredns-66bc5c9577-dhxwz                     kube-system
	b6302a52b3f07       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                           54 seconds ago      Running             kube-proxy                  0                   e71db4128b4e7       kube-proxy-qbkwc                             kube-system
	f2f44d09fd70f       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           54 seconds ago      Exited              storage-provisioner         0                   a1aa4008de3fa       storage-provisioner                          kube-system
	14ae20126a459       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           54 seconds ago      Running             kindnet-cni                 0                   f18e13af11493       kindnet-dp6kh                                kube-system
	a3bc253f74d93       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                           57 seconds ago      Running             kube-scheduler              0                   28c77fa45a339       kube-scheduler-no-preload-619589             kube-system
	6ac3ed6ad22f9       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                           57 seconds ago      Running             kube-controller-manager     0                   e447d40f06734       kube-controller-manager-no-preload-619589    kube-system
	9b89533199bb2       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                           57 seconds ago      Running             kube-apiserver              0                   e98ba9a0fbb61       kube-apiserver-no-preload-619589             kube-system
	1f60fb31039bd       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                           57 seconds ago      Running             etcd                        0                   e00e4b17ecb3f       etcd-no-preload-619589                       kube-system
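
A table like the one above is plain CRI client output; with the runtime endpoint already written to /etc/crictl.yaml earlier in this log, it can be reproduced on the node with either form below (the second, label-filtered variant is the exact invocation minikube itself used above):

    sudo crictl ps -a
    sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
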
	
	
	==> coredns [7941decb9aa0e2ebe4de99bf8450f512bdbb39ec2aa3306ef5a8400615d2d659] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:50987 - 41158 "HINFO IN 8175534550062909321.8811228830837318431. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.0612273s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
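
The i/o timeouts above mean CoreDNS could not reach the API server's service VIP (10.96.0.1:443), which is typical in the window after a node restart before kube-proxy has reprogrammed the service rules. The standard quick check from the Kubernetes DNS-debugging docs is a throwaway pod (assumes working kubectl access to this cluster):

    kubectl run dnstest --image=busybox:1.28 --restart=Never --rm -it -- nslookup kubernetes.default
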
	
	
	==> describe nodes <==
	Name:               no-preload-619589
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-619589
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=50c3a8a3c03e8a84b6c978a884d21c3de8c6d4f1
	                    minikube.k8s.io/name=no-preload-619589
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_23T09_06_57_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 23 Nov 2025 09:06:53 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-619589
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 23 Nov 2025 09:08:47 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 23 Nov 2025 09:08:27 +0000   Sun, 23 Nov 2025 09:06:51 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 23 Nov 2025 09:08:27 +0000   Sun, 23 Nov 2025 09:06:51 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 23 Nov 2025 09:08:27 +0000   Sun, 23 Nov 2025 09:06:51 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 23 Nov 2025 09:08:27 +0000   Sun, 23 Nov 2025 09:07:16 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    no-preload-619589
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 9629f1d5bc1ed524a56ce23c69214c09
	  System UUID:                3483a19d-ff48-49f2-b35e-7cee468a4ef8
	  Boot ID:                    49386d37-de97-442f-8364-9b77578bcf47
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         92s
	  kube-system                 coredns-66bc5c9577-dhxwz                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     111s
	  kube-system                 etcd-no-preload-619589                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         116s
	  kube-system                 kindnet-dp6kh                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      111s
	  kube-system                 kube-apiserver-no-preload-619589              250m (3%)     0 (0%)      0 (0%)           0 (0%)         116s
	  kube-system                 kube-controller-manager-no-preload-619589     200m (2%)     0 (0%)      0 (0%)           0 (0%)         117s
	  kube-system                 kube-proxy-qbkwc                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         111s
	  kube-system                 kube-scheduler-no-preload-619589              100m (1%)     0 (0%)      0 (0%)           0 (0%)         117s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         110s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-9lfkm    0 (0%)        0 (0%)      0 (0%)           0 (0%)         52s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-d5gfp         0 (0%)        0 (0%)      0 (0%)           0 (0%)         52s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 109s               kube-proxy       
	  Normal  Starting                 54s                kube-proxy       
	  Normal  NodeHasSufficientMemory  116s               kubelet          Node no-preload-619589 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    116s               kubelet          Node no-preload-619589 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     116s               kubelet          Node no-preload-619589 status is now: NodeHasSufficientPID
	  Normal  Starting                 116s               kubelet          Starting kubelet.
	  Normal  RegisteredNode           112s               node-controller  Node no-preload-619589 event: Registered Node no-preload-619589 in Controller
	  Normal  NodeReady                96s                kubelet          Node no-preload-619589 status is now: NodeReady
	  Normal  Starting                 58s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  58s (x8 over 58s)  kubelet          Node no-preload-619589 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    58s (x8 over 58s)  kubelet          Node no-preload-619589 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     58s (x8 over 58s)  kubelet          Node no-preload-619589 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           52s                node-controller  Node no-preload-619589 event: Registered Node no-preload-619589 in Controller
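
The node dump above is ordinary kubectl describe output and can be regenerated while triaging; from the CI host the bundled client works against the profile directly (minikube kubectl -- passes its arguments through to a version-matched kubectl):

    out/minikube-linux-amd64 -p no-preload-619589 kubectl -- describe node no-preload-619589
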
	
	
	==> dmesg <==
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff e2 f6 b3 df 65 66 08 06
	[ +15.220231] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff ce d6 cd 1c d5 af 08 06
	[  +0.016823] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff 42 21 9e 70 54 d2 08 06
	[  +0.853950] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 8a f3 da 67 50 34 08 06
	[  +0.000402] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff e2 f6 b3 df 65 66 08 06
	[Nov23 09:06] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 3a fe f0 bb b2 e5 08 06
	[  +0.000433] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000017] ll header: 00000000: ff ff ff ff ff ff 42 21 9e 70 54 d2 08 06
	[ +22.099976] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff fa bc 42 68 ca 38 08 06
	[  +0.042361] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 02 6f 93 2c ed 12 08 06
	[ +12.988668] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff c6 40 c7 0d 08 88 08 06
	[  +0.000458] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff c6 f2 c5 3b d5 0a 08 06
	[  +8.074904] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff ba d8 15 23 cb ea 08 06
	[  +0.000480] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff fa bc 42 68 ca 38 08 06
	
	
	==> etcd [1f60fb31039bdce86058df87c7da04ea74adbafc6e245568fb6ab0413a0af065] <==
	{"level":"warn","ts":"2025-11-23T09:07:55.871029Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33858","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:07:55.881026Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33876","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:07:55.889780Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33880","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:07:55.897874Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33886","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:07:55.909844Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33908","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:07:55.920275Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33926","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:07:55.929794Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33936","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:07:55.938117Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33962","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:07:55.946632Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33976","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:07:55.960342Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34002","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:07:55.971247Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34030","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:07:55.976072Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34048","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:07:55.987487Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34068","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:07:55.992957Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34082","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:07:56.001743Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34112","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:07:56.008786Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34124","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:07:56.017091Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34138","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:07:56.024717Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34148","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:07:56.033832Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34168","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:07:56.041646Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34182","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:07:56.060041Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34200","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:07:56.068473Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34214","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:07:56.076672Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34224","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:07:56.145987Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34242","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-23T09:08:43.852658Z","caller":"traceutil/trace.go:172","msg":"trace[2118257107] transaction","detail":"{read_only:false; response_revision:649; number_of_response:1; }","duration":"102.26408ms","start":"2025-11-23T09:08:43.750374Z","end":"2025-11-23T09:08:43.852638Z","steps":["trace[2118257107] 'process raft request'  (duration: 102.135819ms)"],"step_count":1}
	
	
	==> kernel <==
	 09:08:52 up  1:51,  0 user,  load average: 6.03, 4.65, 2.93
	Linux no-preload-619589 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [14ae20126a459a7fdf582ec5a271d47e3dca1e142c4ebf9e0350dd559be93573] <==
	I1123 09:07:57.727633       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1123 09:07:57.727929       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1123 09:07:57.728200       1 main.go:148] setting mtu 1500 for CNI 
	I1123 09:07:57.728275       1 main.go:178] kindnetd IP family: "ipv4"
	I1123 09:07:57.728307       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-23T09:07:58Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1123 09:07:58.028312       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1123 09:07:58.126585       1 controller.go:381] "Waiting for informer caches to sync"
	I1123 09:07:58.126622       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1123 09:07:58.126848       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1123 09:07:58.527687       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1123 09:07:58.527739       1 metrics.go:72] Registering metrics
	I1123 09:07:58.527838       1 controller.go:711] "Syncing nftables rules"
	I1123 09:08:08.028685       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1123 09:08:08.028746       1 main.go:301] handling current node
	I1123 09:08:18.032054       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1123 09:08:18.032110       1 main.go:301] handling current node
	I1123 09:08:28.028941       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1123 09:08:28.029037       1 main.go:301] handling current node
	I1123 09:08:38.029246       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1123 09:08:38.029285       1 main.go:301] handling current node
	I1123 09:08:48.036905       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1123 09:08:48.036945       1 main.go:301] handling current node
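
The `nri plugin exited: failed to connect to NRI service` line only means kindnet's optional NRI integration found no socket at /var/run/nri/nri.sock (the runtime here has NRI disabled); as the subsequent `Handling node` lines show, kindnet keeps syncing normally without it. Whether the runtime exposes an NRI socket can be checked directly (a sketch, assuming shell access to the node):

# NRI is optional; absence of the socket matches the warning above
minikube -p no-preload-619589 ssh -- "ls -l /var/run/nri/nri.sock || echo 'NRI socket not present'"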
	
	
	==> kube-apiserver [9b89533199bb2186454a2491d3cdd6e0a13a98d889f1739695a869ff190a6ad7] <==
	I1123 09:07:56.758860       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1123 09:07:56.758911       1 aggregator.go:171] initial CRD sync complete...
	I1123 09:07:56.758926       1 autoregister_controller.go:144] Starting autoregister controller
	I1123 09:07:56.758932       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1123 09:07:56.758937       1 cache.go:39] Caches are synced for autoregister controller
	I1123 09:07:56.760365       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1123 09:07:56.760740       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1123 09:07:56.760774       1 policy_source.go:240] refreshing policies
	I1123 09:07:56.760824       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1123 09:07:56.760864       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1123 09:07:56.767048       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	E1123 09:07:56.770448       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1123 09:07:56.810805       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1123 09:07:56.812221       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1123 09:07:57.118717       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1123 09:07:57.126922       1 controller.go:667] quota admission added evaluator for: namespaces
	I1123 09:07:57.195364       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1123 09:07:57.228941       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1123 09:07:57.244210       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1123 09:07:57.307508       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.102.204.243"}
	I1123 09:07:57.322298       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.99.16.228"}
	I1123 09:07:57.659324       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1123 09:08:00.120675       1 controller.go:667] quota admission added evaluator for: endpoints
	I1123 09:08:00.471660       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1123 09:08:00.668559       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
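
The single `Error removing old endpoints from kubernetes service` message appears once at startup and is generally benign: the endpoint reconciler found no apiserver IPs registered in storage yet after the restart and refused to wipe the built-in `kubernetes` Service, retrying instead. The later `allocated clusterIPs` lines confirm normal operation. The current state of that Service can be read back with:

# The default/kubernetes endpoints should list the apiserver address
kubectl --context no-preload-619589 get endpoints kubernetes -n default -o wide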
	
	
	==> kube-controller-manager [6ac3ed6ad22f96a5e8a6803a48c463751843af2805ec1400ba36fedc144cf1d9] <==
	I1123 09:08:00.114576       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1123 09:08:00.115743       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1123 09:08:00.115783       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1123 09:08:00.115787       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1123 09:08:00.115853       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1123 09:08:00.116102       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1123 09:08:00.116809       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1123 09:08:00.117672       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1123 09:08:00.117761       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1123 09:08:00.117775       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1123 09:08:00.117838       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1123 09:08:00.117896       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1123 09:08:00.118047       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="no-preload-619589"
	I1123 09:08:00.118118       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1123 09:08:00.118149       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1123 09:08:00.118255       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1123 09:08:00.119866       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1123 09:08:00.120016       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1123 09:08:00.120022       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1123 09:08:00.120147       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1123 09:08:00.121236       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1123 09:08:00.122376       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1123 09:08:00.125657       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1123 09:08:00.132939       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1123 09:08:00.142263       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [b6302a52b3f07d041a82fd1384e80a24771f24468d8b556d23d35c13521bfcd3] <==
	I1123 09:07:57.546548       1 server_linux.go:53] "Using iptables proxy"
	I1123 09:07:57.632666       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1123 09:07:57.733760       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1123 09:07:57.733801       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1123 09:07:57.733908       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1123 09:07:57.756900       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1123 09:07:57.757021       1 server_linux.go:132] "Using iptables Proxier"
	I1123 09:07:57.764811       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1123 09:07:57.765240       1 server.go:527] "Version info" version="v1.34.1"
	I1123 09:07:57.765280       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1123 09:07:57.766581       1 config.go:106] "Starting endpoint slice config controller"
	I1123 09:07:57.766593       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1123 09:07:57.766995       1 config.go:200] "Starting service config controller"
	I1123 09:07:57.767006       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1123 09:07:57.767148       1 config.go:403] "Starting serviceCIDR config controller"
	I1123 09:07:57.767156       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1123 09:07:57.767479       1 config.go:309] "Starting node config controller"
	I1123 09:07:57.767499       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1123 09:07:57.867598       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1123 09:07:57.867613       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1123 09:07:57.867643       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1123 09:07:57.867648       1 shared_informer.go:356] "Caches are synced" controller="service config"
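
The only error-level kube-proxy line is the `nodePortAddresses is unset` configuration warning quoted above; with the field unset, NodePort traffic is accepted on every local IP. The remedy the message itself suggests is `--nodeport-addresses primary`; a declarative sketch of the same setting, using the kubeproxy.config.k8s.io/v1alpha1 field name (how such a fragment would be injected into a minikube profile is left aside here):

# Declarative equivalent of the suggested --nodeport-addresses flag,
# per the kubeproxy.config.k8s.io/v1alpha1 KubeProxyConfiguration type
cat <<'EOF' > kube-proxy-config.yaml
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
nodePortAddresses: ["primary"]
EOF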
	
	
	==> kube-scheduler [a3bc253f74d935c63450cd3db07c274df85d3f1746da99b79e94bf15141d4c16] <==
	I1123 09:07:56.744292       1 serving.go:386] Generated self-signed cert in-memory
	I1123 09:07:57.394028       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1123 09:07:57.394058       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1123 09:07:57.399530       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1123 09:07:57.399564       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1123 09:07:57.399559       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1123 09:07:57.399569       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1123 09:07:57.399585       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1123 09:07:57.399590       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1123 09:07:57.400053       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1123 09:07:57.400420       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1123 09:07:57.499729       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1123 09:07:57.499837       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1123 09:07:57.500743       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	
	
	==> kubelet <==
	Nov 23 09:08:00 no-preload-619589 kubelet[722]: I1123 09:08:00.640694     722 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9r64z\" (UniqueName: \"kubernetes.io/projected/712cadaa-769d-4ff2-a7d3-2d9a8a8bf56e-kube-api-access-9r64z\") pod \"kubernetes-dashboard-855c9754f9-d5gfp\" (UID: \"712cadaa-769d-4ff2-a7d3-2d9a8a8bf56e\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-d5gfp"
	Nov 23 09:08:00 no-preload-619589 kubelet[722]: I1123 09:08:00.640740     722 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tqfvn\" (UniqueName: \"kubernetes.io/projected/618cb4b2-a55d-4d4b-b08e-59836433f857-kube-api-access-tqfvn\") pod \"dashboard-metrics-scraper-6ffb444bf9-9lfkm\" (UID: \"618cb4b2-a55d-4d4b-b08e-59836433f857\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-9lfkm"
	Nov 23 09:08:05 no-preload-619589 kubelet[722]: I1123 09:08:05.216598     722 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-d5gfp" podStartSLOduration=1.17902052 podStartE2EDuration="5.216575334s" podCreationTimestamp="2025-11-23 09:08:00 +0000 UTC" firstStartedPulling="2025-11-23 09:08:00.873275635 +0000 UTC m=+6.844817125" lastFinishedPulling="2025-11-23 09:08:04.910830447 +0000 UTC m=+10.882371939" observedRunningTime="2025-11-23 09:08:05.215914193 +0000 UTC m=+11.187455705" watchObservedRunningTime="2025-11-23 09:08:05.216575334 +0000 UTC m=+11.188116850"
	Nov 23 09:08:05 no-preload-619589 kubelet[722]: I1123 09:08:05.799251     722 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Nov 23 09:08:08 no-preload-619589 kubelet[722]: I1123 09:08:08.193634     722 scope.go:117] "RemoveContainer" containerID="bc6d5b213b2f6dd05620d6da8131d8f328db25e8c3d4fe2a16d3d90267b62824"
	Nov 23 09:08:09 no-preload-619589 kubelet[722]: I1123 09:08:09.198195     722 scope.go:117] "RemoveContainer" containerID="bc6d5b213b2f6dd05620d6da8131d8f328db25e8c3d4fe2a16d3d90267b62824"
	Nov 23 09:08:09 no-preload-619589 kubelet[722]: I1123 09:08:09.198315     722 scope.go:117] "RemoveContainer" containerID="133b067606a6c30c45e9d04b58352d7a92359119337c1d54259e4c5cf7989151"
	Nov 23 09:08:09 no-preload-619589 kubelet[722]: E1123 09:08:09.198505     722 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-9lfkm_kubernetes-dashboard(618cb4b2-a55d-4d4b-b08e-59836433f857)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-9lfkm" podUID="618cb4b2-a55d-4d4b-b08e-59836433f857"
	Nov 23 09:08:10 no-preload-619589 kubelet[722]: I1123 09:08:10.202495     722 scope.go:117] "RemoveContainer" containerID="133b067606a6c30c45e9d04b58352d7a92359119337c1d54259e4c5cf7989151"
	Nov 23 09:08:10 no-preload-619589 kubelet[722]: E1123 09:08:10.202665     722 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-9lfkm_kubernetes-dashboard(618cb4b2-a55d-4d4b-b08e-59836433f857)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-9lfkm" podUID="618cb4b2-a55d-4d4b-b08e-59836433f857"
	Nov 23 09:08:12 no-preload-619589 kubelet[722]: I1123 09:08:12.676875     722 scope.go:117] "RemoveContainer" containerID="133b067606a6c30c45e9d04b58352d7a92359119337c1d54259e4c5cf7989151"
	Nov 23 09:08:12 no-preload-619589 kubelet[722]: E1123 09:08:12.677121     722 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-9lfkm_kubernetes-dashboard(618cb4b2-a55d-4d4b-b08e-59836433f857)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-9lfkm" podUID="618cb4b2-a55d-4d4b-b08e-59836433f857"
	Nov 23 09:08:27 no-preload-619589 kubelet[722]: I1123 09:08:27.127361     722 scope.go:117] "RemoveContainer" containerID="133b067606a6c30c45e9d04b58352d7a92359119337c1d54259e4c5cf7989151"
	Nov 23 09:08:27 no-preload-619589 kubelet[722]: I1123 09:08:27.248448     722 scope.go:117] "RemoveContainer" containerID="133b067606a6c30c45e9d04b58352d7a92359119337c1d54259e4c5cf7989151"
	Nov 23 09:08:27 no-preload-619589 kubelet[722]: I1123 09:08:27.248805     722 scope.go:117] "RemoveContainer" containerID="3400c7d3fe5a0c4d0c4a74a2bbd7dfcc480fe5e231914a3065df81f0bdc925f6"
	Nov 23 09:08:27 no-preload-619589 kubelet[722]: E1123 09:08:27.249536     722 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-9lfkm_kubernetes-dashboard(618cb4b2-a55d-4d4b-b08e-59836433f857)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-9lfkm" podUID="618cb4b2-a55d-4d4b-b08e-59836433f857"
	Nov 23 09:08:32 no-preload-619589 kubelet[722]: I1123 09:08:32.676891     722 scope.go:117] "RemoveContainer" containerID="3400c7d3fe5a0c4d0c4a74a2bbd7dfcc480fe5e231914a3065df81f0bdc925f6"
	Nov 23 09:08:32 no-preload-619589 kubelet[722]: E1123 09:08:32.677158     722 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-9lfkm_kubernetes-dashboard(618cb4b2-a55d-4d4b-b08e-59836433f857)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-9lfkm" podUID="618cb4b2-a55d-4d4b-b08e-59836433f857"
	Nov 23 09:08:45 no-preload-619589 kubelet[722]: I1123 09:08:45.126666     722 scope.go:117] "RemoveContainer" containerID="3400c7d3fe5a0c4d0c4a74a2bbd7dfcc480fe5e231914a3065df81f0bdc925f6"
	Nov 23 09:08:45 no-preload-619589 kubelet[722]: E1123 09:08:45.126829     722 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-9lfkm_kubernetes-dashboard(618cb4b2-a55d-4d4b-b08e-59836433f857)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-9lfkm" podUID="618cb4b2-a55d-4d4b-b08e-59836433f857"
	Nov 23 09:08:49 no-preload-619589 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 23 09:08:49 no-preload-619589 kubelet[722]: I1123 09:08:49.667506     722 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Nov 23 09:08:49 no-preload-619589 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 23 09:08:49 no-preload-619589 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Nov 23 09:08:49 no-preload-619589 systemd[1]: kubelet.service: Consumed 1.750s CPU time.
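
The kubelet section tells the same story twice over: `dashboard-metrics-scraper` is crash-looping, with the back-off doubling from 10s to 20s as restarts accumulate, and at 09:08:49 systemd stops the kubelet, which is the `pause` under test taking effect. The crashing container's previous output is the natural next thing to inspect:

# Pull the output of the scraper's last failed run (pod name taken from the log above)
kubectl --context no-preload-619589 -n kubernetes-dashboard \
  logs dashboard-metrics-scraper-6ffb444bf9-9lfkm --previous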
	
	
	==> kubernetes-dashboard [c7be853bc6291068babb574a6fed0026a725056d23096bb61e1d6ffc9a4a6fa1] <==
	2025/11/23 09:08:04 Starting overwatch
	2025/11/23 09:08:04 Using namespace: kubernetes-dashboard
	2025/11/23 09:08:04 Using in-cluster config to connect to apiserver
	2025/11/23 09:08:04 Using secret token for csrf signing
	2025/11/23 09:08:04 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/23 09:08:04 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/23 09:08:04 Successful initial request to the apiserver, version: v1.34.1
	2025/11/23 09:08:04 Generating JWE encryption key
	2025/11/23 09:08:04 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/23 09:08:04 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/23 09:08:05 Initializing JWE encryption key from synchronized object
	2025/11/23 09:08:05 Creating in-cluster Sidecar client
	2025/11/23 09:08:05 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/23 09:08:05 Serving insecurely on HTTP port: 9090
	2025/11/23 09:08:35 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
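
The dashboard itself comes up cleanly and serves on port 9090; the only recurring complaint is the metric client health check, which fails because the `dashboard-metrics-scraper` Service it depends on has no ready backend (its pod is the one crash-looping in the kubelet log above). That linkage can be confirmed by checking the Service's endpoints:

# An empty ENDPOINTS column here matches the dashboard's failing health check
kubectl --context no-preload-619589 -n kubernetes-dashboard \
  get endpoints dashboard-metrics-scraper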
	
	
	==> storage-provisioner [23f1cd486ac33874f452790b2608401eddaa4a3bd8f96430c807fcfb5e1937b0] <==
	W1123 09:08:27.677813       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:08:29.681822       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:08:29.687086       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:08:31.691204       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:08:31.696769       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:08:33.700831       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:08:33.705456       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:08:35.708852       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:08:35.714113       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:08:37.718097       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:08:37.724282       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:08:39.728191       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:08:39.733550       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:08:41.736794       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:08:41.745302       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:08:43.748368       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:08:43.853786       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:08:45.856605       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:08:45.861599       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:08:47.864851       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:08:47.868791       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:08:49.872031       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:08:49.876944       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:08:51.879869       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:08:51.884391       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
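
These warnings repeat every two seconds because the storage provisioner's leader-election loop still renews its lock through the deprecated v1 Endpoints API; they are deprecation warnings from the API server, not errors. The lock object itself (named `k8s.io-minikube-hostpath` in minikube's provisioner, an assumption worth verifying) could be inspected with:

# The Endpoints object doubling as the provisioner's leader-election lock (name assumed)
kubectl --context no-preload-619589 -n kube-system \
  get endpoints k8s.io-minikube-hostpath -o yaml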
	
	
	==> storage-provisioner [f2f44d09fd70f07bb62953d4d3a45b5459cdef60ec014cf96fd80a6ed19a134b] <==
	I1123 09:07:57.508626       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1123 09:07:57.513077       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
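
This earlier provisioner instance died immediately: at 09:07:57 it dialed the in-cluster apiserver address 10.96.0.1:443 and got `connection refused`, because the apiserver was only finishing its own cache sync at roughly the same moment (see the kube-apiserver log above); the replacement instance in the previous section then ran fine. Whether the apiserver is reachable and ready now is a one-liner:

# Returns "ok" once all readiness checks, including etcd, pass
kubectl --context no-preload-619589 get --raw /readyz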
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-619589 -n no-preload-619589
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-619589 -n no-preload-619589: exit status 2 (360.759022ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-619589 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-619589
helpers_test.go:243: (dbg) docker inspect no-preload-619589:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "75a1703935537a599d4d63d8a1fcad8aee76b131885db1cc71f51b6fe5b40328",
	        "Created": "2025-11-23T09:06:25.102316496Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 410146,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-23T09:07:47.679820701Z",
	            "FinishedAt": "2025-11-23T09:07:46.740204172Z"
	        },
	        "Image": "sha256:133ca4ac39008d0056ad45d8cb70521d6b70d6e1b8bbff4678fd4b354efbdf70",
	        "ResolvConfPath": "/var/lib/docker/containers/75a1703935537a599d4d63d8a1fcad8aee76b131885db1cc71f51b6fe5b40328/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/75a1703935537a599d4d63d8a1fcad8aee76b131885db1cc71f51b6fe5b40328/hostname",
	        "HostsPath": "/var/lib/docker/containers/75a1703935537a599d4d63d8a1fcad8aee76b131885db1cc71f51b6fe5b40328/hosts",
	        "LogPath": "/var/lib/docker/containers/75a1703935537a599d4d63d8a1fcad8aee76b131885db1cc71f51b6fe5b40328/75a1703935537a599d4d63d8a1fcad8aee76b131885db1cc71f51b6fe5b40328-json.log",
	        "Name": "/no-preload-619589",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-619589:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "no-preload-619589",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "75a1703935537a599d4d63d8a1fcad8aee76b131885db1cc71f51b6fe5b40328",
	                "LowerDir": "/var/lib/docker/overlay2/5661dec26e35ce89a08317de680c51d7eb44a4cd287120651431aafb742f75ce-init/diff:/var/lib/docker/overlay2/5d7426d7b7590330a796f9ce8929a3dfaf6f95af5e86f4e4ea7f2a7f53308616/diff",
	                "MergedDir": "/var/lib/docker/overlay2/5661dec26e35ce89a08317de680c51d7eb44a4cd287120651431aafb742f75ce/merged",
	                "UpperDir": "/var/lib/docker/overlay2/5661dec26e35ce89a08317de680c51d7eb44a4cd287120651431aafb742f75ce/diff",
	                "WorkDir": "/var/lib/docker/overlay2/5661dec26e35ce89a08317de680c51d7eb44a4cd287120651431aafb742f75ce/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "no-preload-619589",
	                "Source": "/var/lib/docker/volumes/no-preload-619589/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-619589",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-619589",
	                "name.minikube.sigs.k8s.io": "no-preload-619589",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "50158894eec67c15e3f74a020c71aa5ecece1c5b8566b6f9dc3866697ddc936e",
	            "SandboxKey": "/var/run/docker/netns/50158894eec6",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33113"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33114"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33117"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33115"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33116"
	                    }
	                ]
	            },
	            "Networks": {
	                "no-preload-619589": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "40c67f27f7925004fe92866c39e8b5aa93f9532071ca8f095a0bf7fb3ffde5bf",
	                    "EndpointID": "54652c8b7de63fe9959a1dd3d45b5a50d54525a366635e51c8e478877b2439c3",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "MacAddress": "b2:32:65:27:5f:e5",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-619589",
	                        "75a170393553"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
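For this failure mode, most of the inspect output above is noise; the relevant facts are `State.Status: running`, `State.Paused: false`, and the restart at 09:07:47. When only those fields are needed, `docker inspect` takes a Go template so the check stays on one line:

# Extract just the container state fields relevant to a pause/unpause test
docker inspect -f 'status={{.State.Status}} paused={{.State.Paused}} started={{.State.StartedAt}}' no-preload-619589
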
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-619589 -n no-preload-619589
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-619589 -n no-preload-619589: exit status 2 (358.976469ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-619589 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p no-preload-619589 logs -n 25: (1.217614485s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p disable-driver-mounts-740936                                                                                                                                                                                                               │ disable-driver-mounts-740936 │ jenkins │ v1.37.0 │ 23 Nov 25 09:07 UTC │ 23 Nov 25 09:07 UTC │
	│ start   │ -p default-k8s-diff-port-602386 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-602386 │ jenkins │ v1.37.0 │ 23 Nov 25 09:07 UTC │ 23 Nov 25 09:07 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-054094 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-054094       │ jenkins │ v1.37.0 │ 23 Nov 25 09:07 UTC │                     │
	│ stop    │ -p old-k8s-version-054094 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-054094       │ jenkins │ v1.37.0 │ 23 Nov 25 09:07 UTC │ 23 Nov 25 09:07 UTC │
	│ addons  │ enable metrics-server -p no-preload-619589 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-619589            │ jenkins │ v1.37.0 │ 23 Nov 25 09:07 UTC │                     │
	│ stop    │ -p no-preload-619589 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-619589            │ jenkins │ v1.37.0 │ 23 Nov 25 09:07 UTC │ 23 Nov 25 09:07 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-054094 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-054094       │ jenkins │ v1.37.0 │ 23 Nov 25 09:07 UTC │ 23 Nov 25 09:07 UTC │
	│ start   │ -p old-k8s-version-054094 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-054094       │ jenkins │ v1.37.0 │ 23 Nov 25 09:07 UTC │ 23 Nov 25 09:08 UTC │
	│ addons  │ enable dashboard -p no-preload-619589 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-619589            │ jenkins │ v1.37.0 │ 23 Nov 25 09:07 UTC │ 23 Nov 25 09:07 UTC │
	│ start   │ -p no-preload-619589 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-619589            │ jenkins │ v1.37.0 │ 23 Nov 25 09:07 UTC │ 23 Nov 25 09:08 UTC │
	│ addons  │ enable metrics-server -p embed-certs-529341 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-529341           │ jenkins │ v1.37.0 │ 23 Nov 25 09:07 UTC │                     │
	│ stop    │ -p embed-certs-529341 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-529341           │ jenkins │ v1.37.0 │ 23 Nov 25 09:07 UTC │ 23 Nov 25 09:08 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-602386 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-602386 │ jenkins │ v1.37.0 │ 23 Nov 25 09:08 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-602386 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-602386 │ jenkins │ v1.37.0 │ 23 Nov 25 09:08 UTC │ 23 Nov 25 09:08 UTC │
	│ addons  │ enable dashboard -p embed-certs-529341 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-529341           │ jenkins │ v1.37.0 │ 23 Nov 25 09:08 UTC │ 23 Nov 25 09:08 UTC │
	│ start   │ -p embed-certs-529341 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-529341           │ jenkins │ v1.37.0 │ 23 Nov 25 09:08 UTC │                     │
	│ addons  │ enable dashboard -p default-k8s-diff-port-602386 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-602386 │ jenkins │ v1.37.0 │ 23 Nov 25 09:08 UTC │ 23 Nov 25 09:08 UTC │
	│ start   │ -p default-k8s-diff-port-602386 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-602386 │ jenkins │ v1.37.0 │ 23 Nov 25 09:08 UTC │                     │
	│ image   │ old-k8s-version-054094 image list --format=json                                                                                                                                                                                               │ old-k8s-version-054094       │ jenkins │ v1.37.0 │ 23 Nov 25 09:08 UTC │ 23 Nov 25 09:08 UTC │
	│ pause   │ -p old-k8s-version-054094 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-054094       │ jenkins │ v1.37.0 │ 23 Nov 25 09:08 UTC │                     │
	│ delete  │ -p old-k8s-version-054094                                                                                                                                                                                                                     │ old-k8s-version-054094       │ jenkins │ v1.37.0 │ 23 Nov 25 09:08 UTC │ 23 Nov 25 09:08 UTC │
	│ delete  │ -p old-k8s-version-054094                                                                                                                                                                                                                     │ old-k8s-version-054094       │ jenkins │ v1.37.0 │ 23 Nov 25 09:08 UTC │ 23 Nov 25 09:08 UTC │
	│ start   │ -p newest-cni-531046 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-531046            │ jenkins │ v1.37.0 │ 23 Nov 25 09:08 UTC │                     │
	│ image   │ no-preload-619589 image list --format=json                                                                                                                                                                                                    │ no-preload-619589            │ jenkins │ v1.37.0 │ 23 Nov 25 09:08 UTC │ 23 Nov 25 09:08 UTC │
	│ pause   │ -p no-preload-619589 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-619589            │ jenkins │ v1.37.0 │ 23 Nov 25 09:08 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/23 09:08:38
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1123 09:08:38.063057  422371 out.go:360] Setting OutFile to fd 1 ...
	I1123 09:08:38.063185  422371 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 09:08:38.063194  422371 out.go:374] Setting ErrFile to fd 2...
	I1123 09:08:38.063199  422371 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 09:08:38.063491  422371 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21969-103686/.minikube/bin
	I1123 09:08:38.064118  422371 out.go:368] Setting JSON to false
	I1123 09:08:38.065952  422371 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":6658,"bootTime":1763882260,"procs":454,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1123 09:08:38.066040  422371 start.go:143] virtualization: kvm guest
	I1123 09:08:38.068178  422371 out.go:179] * [newest-cni-531046] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1123 09:08:38.069546  422371 out.go:179]   - MINIKUBE_LOCATION=21969
	I1123 09:08:38.069540  422371 notify.go:221] Checking for updates...
	I1123 09:08:38.071773  422371 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1123 09:08:38.073033  422371 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21969-103686/kubeconfig
	I1123 09:08:38.078218  422371 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21969-103686/.minikube
	I1123 09:08:38.079577  422371 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1123 09:08:38.080792  422371 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1123 09:08:38.082709  422371 config.go:182] Loaded profile config "default-k8s-diff-port-602386": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 09:08:38.082880  422371 config.go:182] Loaded profile config "embed-certs-529341": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 09:08:38.083058  422371 config.go:182] Loaded profile config "no-preload-619589": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 09:08:38.083206  422371 driver.go:422] Setting default libvirt URI to qemu:///system
	I1123 09:08:38.112463  422371 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1123 09:08:38.112578  422371 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 09:08:38.187928  422371 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:67 OomKillDisable:false NGoroutines:78 SystemTime:2025-11-23 09:08:38.174595805 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1123 09:08:38.188138  422371 docker.go:319] overlay module found
	I1123 09:08:38.190297  422371 out.go:179] * Using the docker driver based on user configuration
	I1123 09:08:38.194907  422371 start.go:309] selected driver: docker
	I1123 09:08:38.194937  422371 start.go:927] validating driver "docker" against <nil>
	I1123 09:08:38.194956  422371 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1123 09:08:38.195732  422371 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 09:08:38.276488  422371 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:67 OomKillDisable:false NGoroutines:78 SystemTime:2025-11-23 09:08:38.264202445 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1123 09:08:38.276771  422371 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	W1123 09:08:38.276823  422371 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1123 09:08:38.277409  422371 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1123 09:08:38.280766  422371 out.go:179] * Using Docker driver with root privileges
	I1123 09:08:38.283289  422371 cni.go:84] Creating CNI manager for ""
	I1123 09:08:38.283395  422371 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1123 09:08:38.283415  422371 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1123 09:08:38.283547  422371 start.go:353] cluster config:
	{Name:newest-cni-531046 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-531046 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 09:08:38.285004  422371 out.go:179] * Starting "newest-cni-531046" primary control-plane node in "newest-cni-531046" cluster
	I1123 09:08:38.286897  422371 cache.go:134] Beginning downloading kic base image for docker with crio
	I1123 09:08:38.288156  422371 out.go:179] * Pulling base image v0.0.48-1763789673-21948 ...
	I1123 09:08:38.290670  422371 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1123 09:08:38.290731  422371 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21969-103686/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1123 09:08:38.290730  422371 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1123 09:08:38.290745  422371 cache.go:65] Caching tarball of preloaded images
	I1123 09:08:38.290879  422371 preload.go:238] Found /home/jenkins/minikube-integration/21969-103686/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1123 09:08:38.290899  422371 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1123 09:08:38.291221  422371 profile.go:143] Saving config to /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/newest-cni-531046/config.json ...
	I1123 09:08:38.291304  422371 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/newest-cni-531046/config.json: {Name:mk7c2c302507534cf8c19e4462e0d95cc43f265c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 09:08:38.318685  422371 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon, skipping pull
	I1123 09:08:38.318710  422371 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in daemon, skipping load
	I1123 09:08:38.318724  422371 cache.go:243] Successfully downloaded all kic artifacts
	I1123 09:08:38.318779  422371 start.go:360] acquireMachinesLock for newest-cni-531046: {Name:mk2e7449a31b4c230f352b5cfe12c4dd1ce5e4f0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1123 09:08:38.318885  422371 start.go:364] duration metric: took 86.746µs to acquireMachinesLock for "newest-cni-531046"
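
The acquireMachinesLock step above is a named lock with a 500ms retry delay and a 10-minute timeout (the {Delay:500ms Timeout:10m0s} parameters in the log). A self-contained Go sketch of that acquire/retry/timeout pattern, assuming a plain O_EXCL lock file rather than minikube's actual lock implementation:

	// lockfile.go - illustrative only; minikube's real machine lock is more involved.
	package main

	import (
		"errors"
		"fmt"
		"os"
		"time"
	)

	// acquire retries creating path exclusively every delay until timeout expires.
	func acquire(path string, delay, timeout time.Duration) (release func(), err error) {
		deadline := time.Now().Add(timeout)
		for {
			f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
			if err == nil {
				f.Close()
				return func() { os.Remove(path) }, nil
			}
			if !errors.Is(err, os.ErrExist) {
				return nil, err // unexpected filesystem error, not contention
			}
			if time.Now().After(deadline) {
				return nil, fmt.Errorf("timed out acquiring %s", path)
			}
			time.Sleep(delay)
		}
	}

	func main() {
		release, err := acquire("/tmp/machines.lock", 500*time.Millisecond, 10*time.Minute)
		if err != nil {
			panic(err)
		}
		defer release()
		fmt.Println("lock held; machine provisioning would happen here")
	}

When the lock is uncontended, as in this run, acquisition is nearly instant (the log reports 86.746µs).
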
	I1123 09:08:38.318916  422371 start.go:93] Provisioning new machine with config: &{Name:newest-cni-531046 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-531046 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1123 09:08:38.319041  422371 start.go:125] createHost starting for "" (driver="docker")
	W1123 09:08:35.978152  416838 pod_ready.go:104] pod "coredns-66bc5c9577-64rdm" is not "Ready", error: <nil>
	W1123 09:08:38.464711  416838 pod_ready.go:104] pod "coredns-66bc5c9577-64rdm" is not "Ready", error: <nil>
	W1123 09:08:39.107107  415250 pod_ready.go:104] pod "coredns-66bc5c9577-k4bmj" is not "Ready", error: <nil>
	W1123 09:08:41.606781  415250 pod_ready.go:104] pod "coredns-66bc5c9577-k4bmj" is not "Ready", error: <nil>
	I1123 09:08:38.321329  422371 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1123 09:08:38.321615  422371 start.go:159] libmachine.API.Create for "newest-cni-531046" (driver="docker")
	I1123 09:08:38.321673  422371 client.go:173] LocalClient.Create starting
	I1123 09:08:38.321773  422371 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21969-103686/.minikube/certs/ca.pem
	I1123 09:08:38.321807  422371 main.go:143] libmachine: Decoding PEM data...
	I1123 09:08:38.321832  422371 main.go:143] libmachine: Parsing certificate...
	I1123 09:08:38.321892  422371 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21969-103686/.minikube/certs/cert.pem
	I1123 09:08:38.321919  422371 main.go:143] libmachine: Decoding PEM data...
	I1123 09:08:38.321937  422371 main.go:143] libmachine: Parsing certificate...
	I1123 09:08:38.322379  422371 cli_runner.go:164] Run: docker network inspect newest-cni-531046 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1123 09:08:38.344902  422371 cli_runner.go:211] docker network inspect newest-cni-531046 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1123 09:08:38.345023  422371 network_create.go:284] running [docker network inspect newest-cni-531046] to gather additional debugging logs...
	I1123 09:08:38.345053  422371 cli_runner.go:164] Run: docker network inspect newest-cni-531046
	W1123 09:08:38.367879  422371 cli_runner.go:211] docker network inspect newest-cni-531046 returned with exit code 1
	I1123 09:08:38.367919  422371 network_create.go:287] error running [docker network inspect newest-cni-531046]: docker network inspect newest-cni-531046: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-531046 not found
	I1123 09:08:38.367936  422371 network_create.go:289] output of [docker network inspect newest-cni-531046]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-531046 not found
	
	** /stderr **
	I1123 09:08:38.368061  422371 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1123 09:08:38.393249  422371 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-f35ea3fda0f8 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:12:67:c4:67:42:d0} reservation:<nil>}
	I1123 09:08:38.394053  422371 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-b5718ee288aa IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:5e:cf:46:ea:6c:f7} reservation:<nil>}
	I1123 09:08:38.394911  422371 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-7539aab81c9c IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:ca:4a:40:12:17:c0} reservation:<nil>}
	I1123 09:08:38.395851  422371 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000364fa0}
	I1123 09:08:38.395895  422371 network_create.go:124] attempt to create docker network newest-cni-531046 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1123 09:08:38.395992  422371 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-531046 newest-cni-531046
	I1123 09:08:38.468495  422371 network_create.go:108] docker network newest-cni-531046 192.168.76.0/24 created
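
The three "skipping subnet" lines above show the probe order: the 192.168.x.0/24 private range is walked upward in what the log shows as steps of 9 (49, 58, 67, ...) until a /24 that no existing bridge occupies is found, here 192.168.76.0/24. A minimal Go sketch of that walk, assuming the set of taken subnets is already known (the real code inspects docker networks and host interfaces to build it):

	// subnetscan.go - illustrative reconstruction of the probe seen in the log.
	package main

	import "fmt"

	// firstFree returns the first candidate /24 not present in taken.
	func firstFree(taken map[string]bool) string {
		for third := 49; third <= 247; third += 9 {
			cidr := fmt.Sprintf("192.168.%d.0/24", third)
			if !taken[cidr] {
				return cidr
			}
		}
		return "" // no free candidate
	}

	func main() {
		// The subnets the log reports as taken by br-f35e..., br-b571..., br-7539...
		taken := map[string]bool{
			"192.168.49.0/24": true,
			"192.168.58.0/24": true,
			"192.168.67.0/24": true,
		}
		fmt.Println(firstFree(taken)) // prints 192.168.76.0/24
	}

The chosen gateway (192.168.76.1) and the node's static IP (192.168.76.2) then follow directly from the selected subnet, as the next lines show.
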
	I1123 09:08:38.468547  422371 kic.go:121] calculated static IP "192.168.76.2" for the "newest-cni-531046" container
	I1123 09:08:38.468621  422371 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1123 09:08:38.493023  422371 cli_runner.go:164] Run: docker volume create newest-cni-531046 --label name.minikube.sigs.k8s.io=newest-cni-531046 --label created_by.minikube.sigs.k8s.io=true
	I1123 09:08:38.517144  422371 oci.go:103] Successfully created a docker volume newest-cni-531046
	I1123 09:08:38.517276  422371 cli_runner.go:164] Run: docker run --rm --name newest-cni-531046-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-531046 --entrypoint /usr/bin/test -v newest-cni-531046:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -d /var/lib
	I1123 09:08:39.766029  422371 cli_runner.go:217] Completed: docker run --rm --name newest-cni-531046-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-531046 --entrypoint /usr/bin/test -v newest-cni-531046:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -d /var/lib: (1.248658868s)
	I1123 09:08:39.766066  422371 oci.go:107] Successfully prepared a docker volume newest-cni-531046
	I1123 09:08:39.766102  422371 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1123 09:08:39.766113  422371 kic.go:194] Starting extracting preloaded images to volume ...
	I1123 09:08:39.766178  422371 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21969-103686/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-531046:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -I lz4 -xf /preloaded.tar -C /extractDir
	W1123 09:08:40.961999  416838 pod_ready.go:104] pod "coredns-66bc5c9577-64rdm" is not "Ready", error: <nil>
	W1123 09:08:43.152956  416838 pod_ready.go:104] pod "coredns-66bc5c9577-64rdm" is not "Ready", error: <nil>
	I1123 09:08:44.168223  422371 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21969-103686/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-531046:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -I lz4 -xf /preloaded.tar -C /extractDir: (4.401997577s)
	I1123 09:08:44.168259  422371 kic.go:203] duration metric: took 4.402142717s to extract preloaded images to volume ...
	W1123 09:08:44.168355  422371 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1123 09:08:44.168395  422371 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1123 09:08:44.168452  422371 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1123 09:08:44.227151  422371 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-531046 --name newest-cni-531046 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-531046 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-531046 --network newest-cni-531046 --ip 192.168.76.2 --volume newest-cni-531046:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f
	I1123 09:08:44.537884  422371 cli_runner.go:164] Run: docker container inspect newest-cni-531046 --format={{.State.Running}}
	I1123 09:08:44.557705  422371 cli_runner.go:164] Run: docker container inspect newest-cni-531046 --format={{.State.Status}}
	I1123 09:08:44.577764  422371 cli_runner.go:164] Run: docker exec newest-cni-531046 stat /var/lib/dpkg/alternatives/iptables
	I1123 09:08:44.624704  422371 oci.go:144] the created container "newest-cni-531046" has a running status.
	I1123 09:08:44.624733  422371 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21969-103686/.minikube/machines/newest-cni-531046/id_rsa...
	I1123 09:08:44.736260  422371 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21969-103686/.minikube/machines/newest-cni-531046/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1123 09:08:44.766667  422371 cli_runner.go:164] Run: docker container inspect newest-cni-531046 --format={{.State.Status}}
	I1123 09:08:44.790662  422371 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1123 09:08:44.790697  422371 kic_runner.go:114] Args: [docker exec --privileged newest-cni-531046 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1123 09:08:44.838987  422371 cli_runner.go:164] Run: docker container inspect newest-cni-531046 --format={{.State.Status}}
	I1123 09:08:44.862887  422371 machine.go:94] provisionDockerMachine start ...
	I1123 09:08:44.863033  422371 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-531046
	I1123 09:08:44.883292  422371 main.go:143] libmachine: Using SSH client type: native
	I1123 09:08:44.883587  422371 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33128 <nil> <nil>}
	I1123 09:08:44.883606  422371 main.go:143] libmachine: About to run SSH command:
	hostname
	I1123 09:08:45.031996  422371 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-531046
	
	I1123 09:08:45.032028  422371 ubuntu.go:182] provisioning hostname "newest-cni-531046"
	I1123 09:08:45.032102  422371 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-531046
	I1123 09:08:45.051178  422371 main.go:143] libmachine: Using SSH client type: native
	I1123 09:08:45.051497  422371 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33128 <nil> <nil>}
	I1123 09:08:45.051524  422371 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-531046 && echo "newest-cni-531046" | sudo tee /etc/hostname
	I1123 09:08:45.208664  422371 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-531046
	
	I1123 09:08:45.208761  422371 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-531046
	I1123 09:08:45.227777  422371 main.go:143] libmachine: Using SSH client type: native
	I1123 09:08:45.228018  422371 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33128 <nil> <nil>}
	I1123 09:08:45.228039  422371 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-531046' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-531046/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-531046' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1123 09:08:45.371606  422371 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1123 09:08:45.371632  422371 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21969-103686/.minikube CaCertPath:/home/jenkins/minikube-integration/21969-103686/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21969-103686/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21969-103686/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21969-103686/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21969-103686/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21969-103686/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21969-103686/.minikube}
	I1123 09:08:45.371651  422371 ubuntu.go:190] setting up certificates
	I1123 09:08:45.371662  422371 provision.go:84] configureAuth start
	I1123 09:08:45.371721  422371 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-531046
	I1123 09:08:45.389774  422371 provision.go:143] copyHostCerts
	I1123 09:08:45.389830  422371 exec_runner.go:144] found /home/jenkins/minikube-integration/21969-103686/.minikube/ca.pem, removing ...
	I1123 09:08:45.389843  422371 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21969-103686/.minikube/ca.pem
	I1123 09:08:45.389919  422371 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21969-103686/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21969-103686/.minikube/ca.pem (1078 bytes)
	I1123 09:08:45.390046  422371 exec_runner.go:144] found /home/jenkins/minikube-integration/21969-103686/.minikube/cert.pem, removing ...
	I1123 09:08:45.390057  422371 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21969-103686/.minikube/cert.pem
	I1123 09:08:45.390089  422371 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21969-103686/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21969-103686/.minikube/cert.pem (1123 bytes)
	I1123 09:08:45.390147  422371 exec_runner.go:144] found /home/jenkins/minikube-integration/21969-103686/.minikube/key.pem, removing ...
	I1123 09:08:45.390155  422371 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21969-103686/.minikube/key.pem
	I1123 09:08:45.390179  422371 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21969-103686/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21969-103686/.minikube/key.pem (1675 bytes)
	I1123 09:08:45.390230  422371 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21969-103686/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21969-103686/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21969-103686/.minikube/certs/ca-key.pem org=jenkins.newest-cni-531046 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-531046]
	I1123 09:08:45.541072  422371 provision.go:177] copyRemoteCerts
	I1123 09:08:45.541133  422371 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1123 09:08:45.541174  422371 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-531046
	I1123 09:08:45.562117  422371 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21969-103686/.minikube/machines/newest-cni-531046/id_rsa Username:docker}
	I1123 09:08:45.667284  422371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-103686/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1123 09:08:45.686630  422371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-103686/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1123 09:08:45.703786  422371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-103686/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1123 09:08:45.722197  422371 provision.go:87] duration metric: took 350.521493ms to configureAuth
	I1123 09:08:45.722225  422371 ubuntu.go:206] setting minikube options for container-runtime
	I1123 09:08:45.722396  422371 config.go:182] Loaded profile config "newest-cni-531046": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 09:08:45.722498  422371 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-531046
	I1123 09:08:45.742391  422371 main.go:143] libmachine: Using SSH client type: native
	I1123 09:08:45.742648  422371 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33128 <nil> <nil>}
	I1123 09:08:45.742671  422371 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1123 09:08:46.039742  422371 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1123 09:08:46.039768  422371 machine.go:97] duration metric: took 1.176833241s to provisionDockerMachine
	I1123 09:08:46.039779  422371 client.go:176] duration metric: took 7.718098891s to LocalClient.Create
	I1123 09:08:46.039798  422371 start.go:167] duration metric: took 7.718185893s to libmachine.API.Create "newest-cni-531046"
	I1123 09:08:46.039814  422371 start.go:293] postStartSetup for "newest-cni-531046" (driver="docker")
	I1123 09:08:46.039831  422371 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1123 09:08:46.039890  422371 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1123 09:08:46.039953  422371 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-531046
	I1123 09:08:46.058468  422371 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21969-103686/.minikube/machines/newest-cni-531046/id_rsa Username:docker}
	I1123 09:08:46.161505  422371 ssh_runner.go:195] Run: cat /etc/os-release
	I1123 09:08:46.164981  422371 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1123 09:08:46.165015  422371 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1123 09:08:46.165036  422371 filesync.go:126] Scanning /home/jenkins/minikube-integration/21969-103686/.minikube/addons for local assets ...
	I1123 09:08:46.165097  422371 filesync.go:126] Scanning /home/jenkins/minikube-integration/21969-103686/.minikube/files for local assets ...
	I1123 09:08:46.165191  422371 filesync.go:149] local asset: /home/jenkins/minikube-integration/21969-103686/.minikube/files/etc/ssl/certs/1072342.pem -> 1072342.pem in /etc/ssl/certs
	I1123 09:08:46.165314  422371 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1123 09:08:46.172750  422371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-103686/.minikube/files/etc/ssl/certs/1072342.pem --> /etc/ssl/certs/1072342.pem (1708 bytes)
	I1123 09:08:46.192188  422371 start.go:296] duration metric: took 152.355864ms for postStartSetup
	I1123 09:08:46.192503  422371 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-531046
	I1123 09:08:46.210543  422371 profile.go:143] Saving config to /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/newest-cni-531046/config.json ...
	I1123 09:08:46.210794  422371 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1123 09:08:46.210839  422371 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-531046
	I1123 09:08:46.227599  422371 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21969-103686/.minikube/machines/newest-cni-531046/id_rsa Username:docker}
	I1123 09:08:46.326028  422371 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1123 09:08:46.330652  422371 start.go:128] duration metric: took 8.011592804s to createHost
	I1123 09:08:46.330683  422371 start.go:83] releasing machines lock for "newest-cni-531046", held for 8.011781957s
	I1123 09:08:46.330787  422371 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-531046
	I1123 09:08:46.349571  422371 ssh_runner.go:195] Run: cat /version.json
	I1123 09:08:46.349646  422371 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1123 09:08:46.349654  422371 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-531046
	I1123 09:08:46.349732  422371 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-531046
	I1123 09:08:46.369439  422371 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21969-103686/.minikube/machines/newest-cni-531046/id_rsa Username:docker}
	I1123 09:08:46.369528  422371 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21969-103686/.minikube/machines/newest-cni-531046/id_rsa Username:docker}
	I1123 09:08:46.522442  422371 ssh_runner.go:195] Run: systemctl --version
	I1123 09:08:46.528993  422371 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1123 09:08:46.564054  422371 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1123 09:08:46.569001  422371 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1123 09:08:46.569074  422371 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1123 09:08:46.595210  422371 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1123 09:08:46.595236  422371 start.go:496] detecting cgroup driver to use...
	I1123 09:08:46.595269  422371 detect.go:190] detected "systemd" cgroup driver on host os
	I1123 09:08:46.595320  422371 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1123 09:08:46.613403  422371 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1123 09:08:46.626130  422371 docker.go:218] disabling cri-docker service (if available) ...
	I1123 09:08:46.626178  422371 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1123 09:08:46.642775  422371 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1123 09:08:46.659791  422371 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1123 09:08:46.745157  422371 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1123 09:08:46.834362  422371 docker.go:234] disabling docker service ...
	I1123 09:08:46.834431  422371 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1123 09:08:46.852811  422371 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1123 09:08:46.865931  422371 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1123 09:08:46.951051  422371 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1123 09:08:47.039859  422371 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1123 09:08:47.052884  422371 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1123 09:08:47.067115  422371 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1123 09:08:47.067181  422371 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 09:08:47.077039  422371 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1123 09:08:47.077101  422371 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 09:08:47.086161  422371 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 09:08:47.094941  422371 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 09:08:47.103813  422371 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1123 09:08:47.112463  422371 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 09:08:47.121360  422371 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 09:08:47.135303  422371 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 09:08:47.144129  422371 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1123 09:08:47.151329  422371 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1123 09:08:47.158513  422371 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 09:08:47.240112  422371 ssh_runner.go:195] Run: sudo systemctl restart crio
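
Taken together, the sed edits above (pause image, cgroup manager, conmon cgroup, unprivileged-port sysctl) leave /etc/crio/crio.conf.d/02-crio.conf with roughly the following settings before crio is restarted. This is a reconstruction for illustration only; the section headers and any surrounding defaults are assumptions, since the log shows only the sed expressions:

	[crio.image]
	pause_image = "registry.k8s.io/pause:3.10.1"

	[crio.runtime]
	cgroup_manager = "systemd"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]
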
	I1123 09:08:47.375544  422371 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1123 09:08:47.375611  422371 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1123 09:08:47.379548  422371 start.go:564] Will wait 60s for crictl version
	I1123 09:08:47.379618  422371 ssh_runner.go:195] Run: which crictl
	I1123 09:08:47.383442  422371 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1123 09:08:47.410017  422371 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
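
The two "Will wait 60s" steps above are plain poll-until-deadline loops: stat the CRI socket until it exists, then call crictl until it answers. A minimal Go sketch of the socket wait, assuming a 200ms poll interval (the actual interval is not shown in the log):

	// waitsock.go - illustrative poll for a unix socket path with a deadline.
	package main

	import (
		"fmt"
		"os"
		"time"
	)

	func waitForSocket(path string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			// Succeed only once the path exists and is actually a socket.
			if st, err := os.Stat(path); err == nil && st.Mode()&os.ModeSocket != 0 {
				return nil
			}
			time.Sleep(200 * time.Millisecond)
		}
		return fmt.Errorf("%s did not appear within %s", path, timeout)
	}

	func main() {
		if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		fmt.Println("crio socket is up")
	}

In this run neither wait came close to its budget: the socket was there on the first stat and crictl answered immediately.
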
	I1123 09:08:47.410104  422371 ssh_runner.go:195] Run: crio --version
	I1123 09:08:47.437993  422371 ssh_runner.go:195] Run: crio --version
	I1123 09:08:47.468147  422371 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1123 09:08:47.469409  422371 cli_runner.go:164] Run: docker network inspect newest-cni-531046 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1123 09:08:47.488435  422371 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1123 09:08:47.492623  422371 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
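
The bash one-liner above makes the host.minikube.internal mapping idempotent: strip any stale entry from /etc/hosts, append the current one, and copy the result back into place. The same logic as a Go sketch (simplified; the real command stages through /tmp/h.$$ and uses sudo cp rather than rewriting in place):

	// hostsentry.go - illustrative idempotent /etc/hosts update.
	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	func upsertHostsEntry(path, ip, name string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		var kept []string
		for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
			if !strings.HasSuffix(line, "\t"+name) { // drop any stale mapping, like grep -v
				kept = append(kept, line)
			}
		}
		kept = append(kept, fmt.Sprintf("%s\t%s", ip, name))
		return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
	}

	func main() {
		if err := upsertHostsEntry("/etc/hosts", "192.168.76.1", "host.minikube.internal"); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}

The same pattern is reused later for the control-plane.minikube.internal entry pointing at 192.168.76.2.
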
	I1123 09:08:47.504612  422371 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	W1123 09:08:43.724599  415250 pod_ready.go:104] pod "coredns-66bc5c9577-k4bmj" is not "Ready", error: <nil>
	W1123 09:08:46.105841  415250 pod_ready.go:104] pod "coredns-66bc5c9577-k4bmj" is not "Ready", error: <nil>
	I1123 09:08:47.505728  422371 kubeadm.go:884] updating cluster {Name:newest-cni-531046 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-531046 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1123 09:08:47.505853  422371 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1123 09:08:47.505903  422371 ssh_runner.go:195] Run: sudo crictl images --output json
	I1123 09:08:47.538254  422371 crio.go:514] all images are preloaded for cri-o runtime.
	I1123 09:08:47.538286  422371 crio.go:433] Images already preloaded, skipping extraction
	I1123 09:08:47.538352  422371 ssh_runner.go:195] Run: sudo crictl images --output json
	I1123 09:08:47.565164  422371 crio.go:514] all images are preloaded for cri-o runtime.
	I1123 09:08:47.565187  422371 cache_images.go:86] Images are preloaded, skipping loading
	I1123 09:08:47.565194  422371 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1123 09:08:47.565289  422371 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-531046 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-531046 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1123 09:08:47.565368  422371 ssh_runner.go:195] Run: crio config
	I1123 09:08:47.612790  422371 cni.go:84] Creating CNI manager for ""
	I1123 09:08:47.612810  422371 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1123 09:08:47.612827  422371 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1123 09:08:47.612854  422371 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-531046 NodeName:newest-cni-531046 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1123 09:08:47.613060  422371 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-531046"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1123 09:08:47.613154  422371 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1123 09:08:47.621671  422371 binaries.go:51] Found k8s binaries, skipping transfer
	I1123 09:08:47.621729  422371 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1123 09:08:47.630324  422371 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1123 09:08:47.644458  422371 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1123 09:08:47.660152  422371 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2211 bytes)
	I1123 09:08:47.673398  422371 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1123 09:08:47.677342  422371 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1123 09:08:47.687438  422371 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 09:08:47.763532  422371 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 09:08:47.791488  422371 certs.go:69] Setting up /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/newest-cni-531046 for IP: 192.168.76.2
	I1123 09:08:47.791510  422371 certs.go:195] generating shared ca certs ...
	I1123 09:08:47.791526  422371 certs.go:227] acquiring lock for ca certs: {Name:mkeed0bc088459219396fb35c578dfb0927f31a3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 09:08:47.791688  422371 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21969-103686/.minikube/ca.key
	I1123 09:08:47.791739  422371 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21969-103686/.minikube/proxy-client-ca.key
	I1123 09:08:47.791753  422371 certs.go:257] generating profile certs ...
	I1123 09:08:47.791817  422371 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/newest-cni-531046/client.key
	I1123 09:08:47.791838  422371 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/newest-cni-531046/client.crt with IP's: []
	I1123 09:08:48.032392  422371 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/newest-cni-531046/client.crt ...
	I1123 09:08:48.032421  422371 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/newest-cni-531046/client.crt: {Name:mk976144a784e1f402ce91ac1356851c2af8ab52 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 09:08:48.032598  422371 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/newest-cni-531046/client.key ...
	I1123 09:08:48.032609  422371 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/newest-cni-531046/client.key: {Name:mk8689b72f501cc91be234b56f833c373d45d735 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 09:08:48.032703  422371 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/newest-cni-531046/apiserver.key.a1ea44be
	I1123 09:08:48.032718  422371 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/newest-cni-531046/apiserver.crt.a1ea44be with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1123 09:08:48.154375  422371 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/newest-cni-531046/apiserver.crt.a1ea44be ...
	I1123 09:08:48.154406  422371 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/newest-cni-531046/apiserver.crt.a1ea44be: {Name:mk85fa4339f770b1cc1a8ab21bd48c1535d0f2e2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 09:08:48.154593  422371 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/newest-cni-531046/apiserver.key.a1ea44be ...
	I1123 09:08:48.154615  422371 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/newest-cni-531046/apiserver.key.a1ea44be: {Name:mkb7dccd9dbd4c24a7085c85e649fe0ef0b2bed2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 09:08:48.154724  422371 certs.go:382] copying /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/newest-cni-531046/apiserver.crt.a1ea44be -> /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/newest-cni-531046/apiserver.crt
	I1123 09:08:48.154801  422371 certs.go:386] copying /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/newest-cni-531046/apiserver.key.a1ea44be -> /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/newest-cni-531046/apiserver.key
	I1123 09:08:48.154856  422371 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/newest-cni-531046/proxy-client.key
	I1123 09:08:48.154871  422371 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/newest-cni-531046/proxy-client.crt with IP's: []
	I1123 09:08:48.278947  422371 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/newest-cni-531046/proxy-client.crt ...
	I1123 09:08:48.278983  422371 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/newest-cni-531046/proxy-client.crt: {Name:mk7a585a20c8ee02e9d23266d3061e7bc61a2b9c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 09:08:48.279150  422371 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/newest-cni-531046/proxy-client.key ...
	I1123 09:08:48.279164  422371 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/newest-cni-531046/proxy-client.key: {Name:mka0e35c957b541ffc74ca4dd08e09a485deaafe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
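
The profile certificates above carry explicit IP SANs (10.96.0.1, 127.0.0.1, 10.0.0.1, 192.168.76.2 for the apiserver cert). A sketch of minting a certificate with those SANs via Go's crypto/x509; unlike minikube, which signs these with the shared minikubeCA, this example self-signs for brevity:

	// selfsign.go - illustrative certificate with IP SANs; not minikube's actual code.
	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			panic(err)
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{CommonName: "minikube"},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config dump
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			IPAddresses: []net.IP{ // the SAN list reported in the log
				net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
				net.ParseIP("10.0.0.1"), net.ParseIP("192.168.76.2"),
			},
		}
		// Template doubles as parent, so the certificate is self-signed.
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		if err != nil {
			panic(err)
		}
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}

10.96.0.1 is the first address of the 10.96.0.0/12 service CIDR (the in-cluster kubernetes.default address), which is why it appears alongside the node IP and loopback.
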
	I1123 09:08:48.279336  422371 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-103686/.minikube/certs/107234.pem (1338 bytes)
	W1123 09:08:48.279376  422371 certs.go:480] ignoring /home/jenkins/minikube-integration/21969-103686/.minikube/certs/107234_empty.pem, impossibly tiny 0 bytes
	I1123 09:08:48.279387  422371 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-103686/.minikube/certs/ca-key.pem (1675 bytes)
	I1123 09:08:48.279410  422371 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-103686/.minikube/certs/ca.pem (1078 bytes)
	I1123 09:08:48.279433  422371 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-103686/.minikube/certs/cert.pem (1123 bytes)
	I1123 09:08:48.279455  422371 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-103686/.minikube/certs/key.pem (1675 bytes)
	I1123 09:08:48.279508  422371 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-103686/.minikube/files/etc/ssl/certs/1072342.pem (1708 bytes)
	I1123 09:08:48.280084  422371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-103686/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1123 09:08:48.299345  422371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-103686/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1123 09:08:48.317116  422371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-103686/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1123 09:08:48.334058  422371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-103686/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1671 bytes)
	I1123 09:08:48.350946  422371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/newest-cni-531046/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1123 09:08:48.368229  422371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/newest-cni-531046/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1123 09:08:48.385613  422371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/newest-cni-531046/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1123 09:08:48.402695  422371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/newest-cni-531046/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1123 09:08:48.421116  422371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-103686/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1123 09:08:48.440103  422371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-103686/.minikube/certs/107234.pem --> /usr/share/ca-certificates/107234.pem (1338 bytes)
	I1123 09:08:48.458407  422371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-103686/.minikube/files/etc/ssl/certs/1072342.pem --> /usr/share/ca-certificates/1072342.pem (1708 bytes)
	I1123 09:08:48.476020  422371 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1123 09:08:48.488750  422371 ssh_runner.go:195] Run: openssl version
	I1123 09:08:48.494927  422371 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1123 09:08:48.503050  422371 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1123 09:08:48.506775  422371 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 23 08:20 /usr/share/ca-certificates/minikubeCA.pem
	I1123 09:08:48.506822  422371 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1123 09:08:48.543113  422371 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1123 09:08:48.552172  422371 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/107234.pem && ln -fs /usr/share/ca-certificates/107234.pem /etc/ssl/certs/107234.pem"
	I1123 09:08:48.560784  422371 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/107234.pem
	I1123 09:08:48.564768  422371 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 23 08:25 /usr/share/ca-certificates/107234.pem
	I1123 09:08:48.564828  422371 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/107234.pem
	I1123 09:08:48.608294  422371 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/107234.pem /etc/ssl/certs/51391683.0"
	I1123 09:08:48.617212  422371 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1072342.pem && ln -fs /usr/share/ca-certificates/1072342.pem /etc/ssl/certs/1072342.pem"
	I1123 09:08:48.626532  422371 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1072342.pem
	I1123 09:08:48.630942  422371 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 23 08:25 /usr/share/ca-certificates/1072342.pem
	I1123 09:08:48.631085  422371 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1072342.pem
	I1123 09:08:48.666782  422371 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1072342.pem /etc/ssl/certs/3ec20f2e.0"
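
The run above is minikube installing its CA certificates into the node's trust store: each PEM under /usr/share/ca-certificates is hashed with `openssl x509 -hash -noout` and symlinked into /etc/ssl/certs as `<hash>.0`, the lookup convention OpenSSL's c_rehash uses (the `b5213941.0` link matches the minikubeCA.pem hash). A minimal Go sketch of that step, run locally instead of over minikube's ssh_runner; paths and the function name are illustrative, not minikube's actual code:

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"path/filepath"
    	"strings"
    )

    // installCACert mirrors the logged sequence: compute the OpenSSL subject
    // hash of a PEM certificate, then symlink it into /etc/ssl/certs as
    // <hash>.0 so TLS clients on the node can find it by hash lookup.
    func installCACert(pemPath string) error {
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
    	if err != nil {
    		return fmt.Errorf("hashing %s: %w", pemPath, err)
    	}
    	hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
    	link := filepath.Join("/etc/ssl/certs", hash+".0")
    	_ = os.Remove(link) // ln -fs semantics: replace an existing link
    	return os.Symlink(pemPath, link)
    }

    func main() {
    	if err := installCACert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    	}
    }
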
	I1123 09:08:48.677518  422371 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1123 09:08:48.681151  422371 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1123 09:08:48.681214  422371 kubeadm.go:401] StartCluster: {Name:newest-cni-531046 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-531046 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 09:08:48.681302  422371 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1123 09:08:48.681360  422371 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1123 09:08:48.707656  422371 cri.go:89] found id: ""
	I1123 09:08:48.707721  422371 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1123 09:08:48.715885  422371 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1123 09:08:48.724069  422371 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1123 09:08:48.724125  422371 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1123 09:08:48.731960  422371 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1123 09:08:48.731995  422371 kubeadm.go:158] found existing configuration files:
	
	I1123 09:08:48.732033  422371 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1123 09:08:48.740868  422371 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1123 09:08:48.740935  422371 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1123 09:08:48.749362  422371 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1123 09:08:48.757073  422371 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1123 09:08:48.757137  422371 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1123 09:08:48.764375  422371 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1123 09:08:48.772291  422371 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1123 09:08:48.772337  422371 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1123 09:08:48.779794  422371 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1123 09:08:48.788802  422371 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1123 09:08:48.788876  422371 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
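
The four grep/rm pairs above are minikube's stale-kubeconfig cleanup: each control-plane config file is grepped for the expected https://control-plane.minikube.internal:8443 endpoint and deleted when the endpoint is absent (here the files do not exist yet, so every grep exits 2 and the `rm -f` is a no-op). A hedged Go sketch of the same loop, run locally rather than through ssh_runner; names are illustrative:

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    )

    // cleanStaleConfigs mirrors the logged loop: keep a kubeconfig only if
    // it already points at the expected control-plane endpoint.
    func cleanStaleConfigs(endpoint string, files []string) {
    	for _, f := range files {
    		// grep exits non-zero when the pattern (or the file) is missing.
    		if err := exec.Command("grep", endpoint, f).Run(); err != nil {
    			fmt.Printf("%q may not be in %s - will remove\n", endpoint, f)
    			_ = os.Remove(f) // rm -f semantics: ignore a missing file
    		}
    	}
    }

    func main() {
    	cleanStaleConfigs("https://control-plane.minikube.internal:8443", []string{
    		"/etc/kubernetes/admin.conf",
    		"/etc/kubernetes/kubelet.conf",
    		"/etc/kubernetes/controller-manager.conf",
    		"/etc/kubernetes/scheduler.conf",
    	})
    }
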
	I1123 09:08:48.796307  422371 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1123 09:08:48.834042  422371 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1123 09:08:48.834787  422371 kubeadm.go:319] [preflight] Running pre-flight checks
	I1123 09:08:48.853738  422371 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1123 09:08:48.853845  422371 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1044-gcp
	I1123 09:08:48.853911  422371 kubeadm.go:319] OS: Linux
	I1123 09:08:48.853979  422371 kubeadm.go:319] CGROUPS_CPU: enabled
	I1123 09:08:48.854052  422371 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1123 09:08:48.854114  422371 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1123 09:08:48.854191  422371 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1123 09:08:48.854267  422371 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1123 09:08:48.854347  422371 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1123 09:08:48.854431  422371 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1123 09:08:48.854474  422371 kubeadm.go:319] CGROUPS_IO: enabled
	I1123 09:08:48.912767  422371 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1123 09:08:48.912928  422371 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1123 09:08:48.913088  422371 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1123 09:08:48.923434  422371 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	W1123 09:08:45.461624  416838 pod_ready.go:104] pod "coredns-66bc5c9577-64rdm" is not "Ready", error: <nil>
	W1123 09:08:47.461833  416838 pod_ready.go:104] pod "coredns-66bc5c9577-64rdm" is not "Ready", error: <nil>
	W1123 09:08:48.106628  415250 pod_ready.go:104] pod "coredns-66bc5c9577-k4bmj" is not "Ready", error: <nil>
	W1123 09:08:50.605768  415250 pod_ready.go:104] pod "coredns-66bc5c9577-k4bmj" is not "Ready", error: <nil>
	W1123 09:08:52.608742  415250 pod_ready.go:104] pod "coredns-66bc5c9577-k4bmj" is not "Ready", error: <nil>
	I1123 09:08:48.925558  422371 out.go:252]   - Generating certificates and keys ...
	I1123 09:08:48.925678  422371 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1123 09:08:48.925804  422371 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1123 09:08:49.034803  422371 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1123 09:08:49.289450  422371 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1123 09:08:49.448806  422371 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1123 09:08:49.726714  422371 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1123 09:08:49.777203  422371 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1123 09:08:49.777360  422371 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-531046] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1123 09:08:50.033016  422371 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1123 09:08:50.033189  422371 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-531046] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1123 09:08:50.554580  422371 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1123 09:08:50.798929  422371 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1123 09:08:51.130547  422371 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1123 09:08:51.130731  422371 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1123 09:08:51.382963  422371 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1123 09:08:51.876494  422371 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1123 09:08:52.259170  422371 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1123 09:08:52.556836  422371 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1123 09:08:52.743709  422371 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1123 09:08:52.744448  422371 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1123 09:08:52.748107  422371 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1123 09:08:52.749366  422371 out.go:252]   - Booting up control plane ...
	I1123 09:08:52.749494  422371 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1123 09:08:52.749594  422371 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1123 09:08:52.750313  422371 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1123 09:08:52.764609  422371 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1123 09:08:52.764769  422371 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1123 09:08:52.771268  422371 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1123 09:08:52.772835  422371 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1123 09:08:52.772922  422371 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1123 09:08:52.899476  422371 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1123 09:08:52.899645  422371 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
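
At this point kubeadm blocks on the kubelet's local health endpoint before it verifies the static control-plane pods; the minikube log is truncated here mid-run. A minimal sketch of that kind of readiness poll, with the interval chosen here as an assumption rather than kubeadm's exact value:

    package main

    import (
    	"fmt"
    	"net/http"
    	"time"
    )

    // waitForKubelet polls the healthz endpoint named by the [kubelet-check]
    // phase above until it returns 200 OK or the deadline passes.
    func waitForKubelet(url string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		resp, err := http.Get(url)
    		if err == nil {
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				return nil // kubelet is healthy
    			}
    		}
    		time.Sleep(time.Second) // assumed retry interval
    	}
    	return fmt.Errorf("kubelet at %s not healthy after %s", url, timeout)
    }

    func main() {
    	if err := waitForKubelet("http://127.0.0.1:10248/healthz", 4*time.Minute); err != nil {
    		fmt.Println(err)
    	}
    }
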
	
	
	==> CRI-O <==
	Nov 23 09:08:08 no-preload-619589 crio[572]: time="2025-11-23T09:08:08.04442582Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 23 09:08:08 no-preload-619589 crio[572]: time="2025-11-23T09:08:08.04799361Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 23 09:08:08 no-preload-619589 crio[572]: time="2025-11-23T09:08:08.048016723Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 23 09:08:08 no-preload-619589 crio[572]: time="2025-11-23T09:08:08.19407123Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=f516b2f8-58c7-4c56-ba9d-bf4a1476d39f name=/runtime.v1.ImageService/ImageStatus
	Nov 23 09:08:08 no-preload-619589 crio[572]: time="2025-11-23T09:08:08.197095646Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=525786ee-e0ad-433e-a799-da7e8e51e650 name=/runtime.v1.ImageService/ImageStatus
	Nov 23 09:08:08 no-preload-619589 crio[572]: time="2025-11-23T09:08:08.200244923Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-9lfkm/dashboard-metrics-scraper" id=ced9e8d6-5d86-4a21-a79b-4ab1d9fc845b name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 09:08:08 no-preload-619589 crio[572]: time="2025-11-23T09:08:08.200381572Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 09:08:08 no-preload-619589 crio[572]: time="2025-11-23T09:08:08.210164479Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 09:08:08 no-preload-619589 crio[572]: time="2025-11-23T09:08:08.210721804Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 09:08:08 no-preload-619589 crio[572]: time="2025-11-23T09:08:08.234509806Z" level=info msg="Created container 133b067606a6c30c45e9d04b58352d7a92359119337c1d54259e4c5cf7989151: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-9lfkm/dashboard-metrics-scraper" id=ced9e8d6-5d86-4a21-a79b-4ab1d9fc845b name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 09:08:08 no-preload-619589 crio[572]: time="2025-11-23T09:08:08.235213677Z" level=info msg="Starting container: 133b067606a6c30c45e9d04b58352d7a92359119337c1d54259e4c5cf7989151" id=4775bc7e-53eb-4cce-abae-eb9304187ed3 name=/runtime.v1.RuntimeService/StartContainer
	Nov 23 09:08:08 no-preload-619589 crio[572]: time="2025-11-23T09:08:08.236912267Z" level=info msg="Started container" PID=1764 containerID=133b067606a6c30c45e9d04b58352d7a92359119337c1d54259e4c5cf7989151 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-9lfkm/dashboard-metrics-scraper id=4775bc7e-53eb-4cce-abae-eb9304187ed3 name=/runtime.v1.RuntimeService/StartContainer sandboxID=7a836d85f778a95d1feea58357164c5b223e2efaf6ab192de666e1a9cdc19f23
	Nov 23 09:08:09 no-preload-619589 crio[572]: time="2025-11-23T09:08:09.199578908Z" level=info msg="Removing container: bc6d5b213b2f6dd05620d6da8131d8f328db25e8c3d4fe2a16d3d90267b62824" id=8b25b206-2c6e-49e5-a8a0-86dc94991145 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 23 09:08:09 no-preload-619589 crio[572]: time="2025-11-23T09:08:09.21132694Z" level=info msg="Removed container bc6d5b213b2f6dd05620d6da8131d8f328db25e8c3d4fe2a16d3d90267b62824: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-9lfkm/dashboard-metrics-scraper" id=8b25b206-2c6e-49e5-a8a0-86dc94991145 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 23 09:08:27 no-preload-619589 crio[572]: time="2025-11-23T09:08:27.127925963Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=580b103f-9766-453f-a537-ff32e5ebdcd3 name=/runtime.v1.ImageService/ImageStatus
	Nov 23 09:08:27 no-preload-619589 crio[572]: time="2025-11-23T09:08:27.129009803Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=e23eea18-774c-40ea-9ac0-781bb157918d name=/runtime.v1.ImageService/ImageStatus
	Nov 23 09:08:27 no-preload-619589 crio[572]: time="2025-11-23T09:08:27.130285985Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-9lfkm/dashboard-metrics-scraper" id=537c22a0-78f1-418f-8df0-9c9e96eddbff name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 09:08:27 no-preload-619589 crio[572]: time="2025-11-23T09:08:27.130424091Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 09:08:27 no-preload-619589 crio[572]: time="2025-11-23T09:08:27.137734323Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 09:08:27 no-preload-619589 crio[572]: time="2025-11-23T09:08:27.138409867Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 09:08:27 no-preload-619589 crio[572]: time="2025-11-23T09:08:27.175996718Z" level=info msg="Created container 3400c7d3fe5a0c4d0c4a74a2bbd7dfcc480fe5e231914a3065df81f0bdc925f6: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-9lfkm/dashboard-metrics-scraper" id=537c22a0-78f1-418f-8df0-9c9e96eddbff name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 09:08:27 no-preload-619589 crio[572]: time="2025-11-23T09:08:27.176645815Z" level=info msg="Starting container: 3400c7d3fe5a0c4d0c4a74a2bbd7dfcc480fe5e231914a3065df81f0bdc925f6" id=2cff9469-89de-4344-8963-286871d710f0 name=/runtime.v1.RuntimeService/StartContainer
	Nov 23 09:08:27 no-preload-619589 crio[572]: time="2025-11-23T09:08:27.180675818Z" level=info msg="Started container" PID=1774 containerID=3400c7d3fe5a0c4d0c4a74a2bbd7dfcc480fe5e231914a3065df81f0bdc925f6 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-9lfkm/dashboard-metrics-scraper id=2cff9469-89de-4344-8963-286871d710f0 name=/runtime.v1.RuntimeService/StartContainer sandboxID=7a836d85f778a95d1feea58357164c5b223e2efaf6ab192de666e1a9cdc19f23
	Nov 23 09:08:27 no-preload-619589 crio[572]: time="2025-11-23T09:08:27.2498808Z" level=info msg="Removing container: 133b067606a6c30c45e9d04b58352d7a92359119337c1d54259e4c5cf7989151" id=c6b998b3-f48a-47e0-a541-66a9c5ee7beb name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 23 09:08:27 no-preload-619589 crio[572]: time="2025-11-23T09:08:27.263130957Z" level=info msg="Removed container 133b067606a6c30c45e9d04b58352d7a92359119337c1d54259e4c5cf7989151: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-9lfkm/dashboard-metrics-scraper" id=c6b998b3-f48a-47e0-a541-66a9c5ee7beb name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	3400c7d3fe5a0       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           26 seconds ago      Exited              dashboard-metrics-scraper   2                   7a836d85f778a       dashboard-metrics-scraper-6ffb444bf9-9lfkm   kubernetes-dashboard
	c7be853bc6291       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   49 seconds ago      Running             kubernetes-dashboard        0                   c9470f9844bdd       kubernetes-dashboard-855c9754f9-d5gfp        kubernetes-dashboard
	23f1cd486ac33       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           55 seconds ago      Running             storage-provisioner         1                   a1aa4008de3fa       storage-provisioner                          kube-system
	8c900a51b3205       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           56 seconds ago      Running             busybox                     1                   fc0caab9d915a       busybox                                      default
	7941decb9aa0e       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           56 seconds ago      Running             coredns                     0                   5d13ea70f2c74       coredns-66bc5c9577-dhxwz                     kube-system
	b6302a52b3f07       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                           56 seconds ago      Running             kube-proxy                  0                   e71db4128b4e7       kube-proxy-qbkwc                             kube-system
	f2f44d09fd70f       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           56 seconds ago      Exited              storage-provisioner         0                   a1aa4008de3fa       storage-provisioner                          kube-system
	14ae20126a459       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           56 seconds ago      Running             kindnet-cni                 0                   f18e13af11493       kindnet-dp6kh                                kube-system
	a3bc253f74d93       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                           59 seconds ago      Running             kube-scheduler              0                   28c77fa45a339       kube-scheduler-no-preload-619589             kube-system
	6ac3ed6ad22f9       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                           59 seconds ago      Running             kube-controller-manager     0                   e447d40f06734       kube-controller-manager-no-preload-619589    kube-system
	9b89533199bb2       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                           59 seconds ago      Running             kube-apiserver              0                   e98ba9a0fbb61       kube-apiserver-no-preload-619589             kube-system
	1f60fb31039bd       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                           59 seconds ago      Running             etcd                        0                   e00e4b17ecb3f       etcd-no-preload-619589                       kube-system
	
	
	==> coredns [7941decb9aa0e2ebe4de99bf8450f512bdbb39ec2aa3306ef5a8400615d2d659] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:50987 - 41158 "HINFO IN 8175534550062909321.8811228830837318431. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.0612273s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               no-preload-619589
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-619589
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=50c3a8a3c03e8a84b6c978a884d21c3de8c6d4f1
	                    minikube.k8s.io/name=no-preload-619589
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_23T09_06_57_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 23 Nov 2025 09:06:53 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-619589
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 23 Nov 2025 09:08:47 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 23 Nov 2025 09:08:27 +0000   Sun, 23 Nov 2025 09:06:51 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 23 Nov 2025 09:08:27 +0000   Sun, 23 Nov 2025 09:06:51 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 23 Nov 2025 09:08:27 +0000   Sun, 23 Nov 2025 09:06:51 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 23 Nov 2025 09:08:27 +0000   Sun, 23 Nov 2025 09:07:16 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    no-preload-619589
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 9629f1d5bc1ed524a56ce23c69214c09
	  System UUID:                3483a19d-ff48-49f2-b35e-7cee468a4ef8
	  Boot ID:                    49386d37-de97-442f-8364-9b77578bcf47
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         94s
	  kube-system                 coredns-66bc5c9577-dhxwz                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     113s
	  kube-system                 etcd-no-preload-619589                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         118s
	  kube-system                 kindnet-dp6kh                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      113s
	  kube-system                 kube-apiserver-no-preload-619589              250m (3%)     0 (0%)      0 (0%)           0 (0%)         118s
	  kube-system                 kube-controller-manager-no-preload-619589     200m (2%)     0 (0%)      0 (0%)           0 (0%)         119s
	  kube-system                 kube-proxy-qbkwc                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         113s
	  kube-system                 kube-scheduler-no-preload-619589              100m (1%)     0 (0%)      0 (0%)           0 (0%)         119s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         112s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-9lfkm    0 (0%)        0 (0%)      0 (0%)           0 (0%)         54s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-d5gfp         0 (0%)        0 (0%)      0 (0%)           0 (0%)         54s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 111s               kube-proxy       
	  Normal  Starting                 56s                kube-proxy       
	  Normal  NodeHasSufficientMemory  118s               kubelet          Node no-preload-619589 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    118s               kubelet          Node no-preload-619589 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     118s               kubelet          Node no-preload-619589 status is now: NodeHasSufficientPID
	  Normal  Starting                 118s               kubelet          Starting kubelet.
	  Normal  RegisteredNode           114s               node-controller  Node no-preload-619589 event: Registered Node no-preload-619589 in Controller
	  Normal  NodeReady                98s                kubelet          Node no-preload-619589 status is now: NodeReady
	  Normal  Starting                 60s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  60s (x8 over 60s)  kubelet          Node no-preload-619589 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    60s (x8 over 60s)  kubelet          Node no-preload-619589 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     60s (x8 over 60s)  kubelet          Node no-preload-619589 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           54s                node-controller  Node no-preload-619589 event: Registered Node no-preload-619589 in Controller
	
	
	==> dmesg <==
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff e2 f6 b3 df 65 66 08 06
	[ +15.220231] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff ce d6 cd 1c d5 af 08 06
	[  +0.016823] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff 42 21 9e 70 54 d2 08 06
	[  +0.853950] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 8a f3 da 67 50 34 08 06
	[  +0.000402] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff e2 f6 b3 df 65 66 08 06
	[Nov23 09:06] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 3a fe f0 bb b2 e5 08 06
	[  +0.000433] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000017] ll header: 00000000: ff ff ff ff ff ff 42 21 9e 70 54 d2 08 06
	[ +22.099976] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff fa bc 42 68 ca 38 08 06
	[  +0.042361] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 02 6f 93 2c ed 12 08 06
	[ +12.988668] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff c6 40 c7 0d 08 88 08 06
	[  +0.000458] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff c6 f2 c5 3b d5 0a 08 06
	[  +8.074904] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff ba d8 15 23 cb ea 08 06
	[  +0.000480] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff fa bc 42 68 ca 38 08 06
	
	
	==> etcd [1f60fb31039bdce86058df87c7da04ea74adbafc6e245568fb6ab0413a0af065] <==
	{"level":"warn","ts":"2025-11-23T09:07:55.871029Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33858","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:07:55.881026Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33876","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:07:55.889780Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33880","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:07:55.897874Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33886","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:07:55.909844Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33908","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:07:55.920275Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33926","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:07:55.929794Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33936","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:07:55.938117Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33962","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:07:55.946632Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33976","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:07:55.960342Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34002","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:07:55.971247Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34030","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:07:55.976072Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34048","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:07:55.987487Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34068","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:07:55.992957Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34082","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:07:56.001743Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34112","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:07:56.008786Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34124","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:07:56.017091Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34138","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:07:56.024717Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34148","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:07:56.033832Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34168","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:07:56.041646Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34182","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:07:56.060041Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34200","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:07:56.068473Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34214","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:07:56.076672Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34224","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:07:56.145987Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34242","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-23T09:08:43.852658Z","caller":"traceutil/trace.go:172","msg":"trace[2118257107] transaction","detail":"{read_only:false; response_revision:649; number_of_response:1; }","duration":"102.26408ms","start":"2025-11-23T09:08:43.750374Z","end":"2025-11-23T09:08:43.852638Z","steps":["trace[2118257107] 'process raft request'  (duration: 102.135819ms)"],"step_count":1}
	
	
	==> kernel <==
	 09:08:54 up  1:51,  0 user,  load average: 6.03, 4.65, 2.93
	Linux no-preload-619589 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [14ae20126a459a7fdf582ec5a271d47e3dca1e142c4ebf9e0350dd559be93573] <==
	I1123 09:07:57.727633       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1123 09:07:57.727929       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1123 09:07:57.728200       1 main.go:148] setting mtu 1500 for CNI 
	I1123 09:07:57.728275       1 main.go:178] kindnetd IP family: "ipv4"
	I1123 09:07:57.728307       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-23T09:07:58Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1123 09:07:58.028312       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1123 09:07:58.126585       1 controller.go:381] "Waiting for informer caches to sync"
	I1123 09:07:58.126622       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1123 09:07:58.126848       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1123 09:07:58.527687       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1123 09:07:58.527739       1 metrics.go:72] Registering metrics
	I1123 09:07:58.527838       1 controller.go:711] "Syncing nftables rules"
	I1123 09:08:08.028685       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1123 09:08:08.028746       1 main.go:301] handling current node
	I1123 09:08:18.032054       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1123 09:08:18.032110       1 main.go:301] handling current node
	I1123 09:08:28.028941       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1123 09:08:28.029037       1 main.go:301] handling current node
	I1123 09:08:38.029246       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1123 09:08:38.029285       1 main.go:301] handling current node
	I1123 09:08:48.036905       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1123 09:08:48.036945       1 main.go:301] handling current node
	
	
	==> kube-apiserver [9b89533199bb2186454a2491d3cdd6e0a13a98d889f1739695a869ff190a6ad7] <==
	I1123 09:07:56.758860       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1123 09:07:56.758911       1 aggregator.go:171] initial CRD sync complete...
	I1123 09:07:56.758926       1 autoregister_controller.go:144] Starting autoregister controller
	I1123 09:07:56.758932       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1123 09:07:56.758937       1 cache.go:39] Caches are synced for autoregister controller
	I1123 09:07:56.760365       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1123 09:07:56.760740       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1123 09:07:56.760774       1 policy_source.go:240] refreshing policies
	I1123 09:07:56.760824       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1123 09:07:56.760864       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1123 09:07:56.767048       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	E1123 09:07:56.770448       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1123 09:07:56.810805       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1123 09:07:56.812221       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1123 09:07:57.118717       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1123 09:07:57.126922       1 controller.go:667] quota admission added evaluator for: namespaces
	I1123 09:07:57.195364       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1123 09:07:57.228941       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1123 09:07:57.244210       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1123 09:07:57.307508       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.102.204.243"}
	I1123 09:07:57.322298       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.99.16.228"}
	I1123 09:07:57.659324       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1123 09:08:00.120675       1 controller.go:667] quota admission added evaluator for: endpoints
	I1123 09:08:00.471660       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1123 09:08:00.668559       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [6ac3ed6ad22f96a5e8a6803a48c463751843af2805ec1400ba36fedc144cf1d9] <==
	I1123 09:08:00.114576       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1123 09:08:00.115743       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1123 09:08:00.115783       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1123 09:08:00.115787       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1123 09:08:00.115853       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1123 09:08:00.116102       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1123 09:08:00.116809       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1123 09:08:00.117672       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1123 09:08:00.117761       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1123 09:08:00.117775       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1123 09:08:00.117838       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1123 09:08:00.117896       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1123 09:08:00.118047       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="no-preload-619589"
	I1123 09:08:00.118118       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1123 09:08:00.118149       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1123 09:08:00.118255       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1123 09:08:00.119866       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1123 09:08:00.120016       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1123 09:08:00.120022       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1123 09:08:00.120147       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1123 09:08:00.121236       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1123 09:08:00.122376       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1123 09:08:00.125657       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1123 09:08:00.132939       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1123 09:08:00.142263       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [b6302a52b3f07d041a82fd1384e80a24771f24468d8b556d23d35c13521bfcd3] <==
	I1123 09:07:57.546548       1 server_linux.go:53] "Using iptables proxy"
	I1123 09:07:57.632666       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1123 09:07:57.733760       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1123 09:07:57.733801       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1123 09:07:57.733908       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1123 09:07:57.756900       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1123 09:07:57.757021       1 server_linux.go:132] "Using iptables Proxier"
	I1123 09:07:57.764811       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1123 09:07:57.765240       1 server.go:527] "Version info" version="v1.34.1"
	I1123 09:07:57.765280       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1123 09:07:57.766581       1 config.go:106] "Starting endpoint slice config controller"
	I1123 09:07:57.766593       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1123 09:07:57.766995       1 config.go:200] "Starting service config controller"
	I1123 09:07:57.767006       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1123 09:07:57.767148       1 config.go:403] "Starting serviceCIDR config controller"
	I1123 09:07:57.767156       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1123 09:07:57.767479       1 config.go:309] "Starting node config controller"
	I1123 09:07:57.767499       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1123 09:07:57.867598       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1123 09:07:57.867613       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1123 09:07:57.867643       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1123 09:07:57.867648       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [a3bc253f74d935c63450cd3db07c274df85d3f1746da99b79e94bf15141d4c16] <==
	I1123 09:07:56.744292       1 serving.go:386] Generated self-signed cert in-memory
	I1123 09:07:57.394028       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1123 09:07:57.394058       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1123 09:07:57.399530       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1123 09:07:57.399564       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1123 09:07:57.399559       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1123 09:07:57.399569       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1123 09:07:57.399585       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1123 09:07:57.399590       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1123 09:07:57.400053       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1123 09:07:57.400420       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1123 09:07:57.499729       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1123 09:07:57.499837       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1123 09:07:57.500743       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	
	
	==> kubelet <==
	Nov 23 09:08:00 no-preload-619589 kubelet[722]: I1123 09:08:00.640694     722 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9r64z\" (UniqueName: \"kubernetes.io/projected/712cadaa-769d-4ff2-a7d3-2d9a8a8bf56e-kube-api-access-9r64z\") pod \"kubernetes-dashboard-855c9754f9-d5gfp\" (UID: \"712cadaa-769d-4ff2-a7d3-2d9a8a8bf56e\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-d5gfp"
	Nov 23 09:08:00 no-preload-619589 kubelet[722]: I1123 09:08:00.640740     722 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tqfvn\" (UniqueName: \"kubernetes.io/projected/618cb4b2-a55d-4d4b-b08e-59836433f857-kube-api-access-tqfvn\") pod \"dashboard-metrics-scraper-6ffb444bf9-9lfkm\" (UID: \"618cb4b2-a55d-4d4b-b08e-59836433f857\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-9lfkm"
	Nov 23 09:08:05 no-preload-619589 kubelet[722]: I1123 09:08:05.216598     722 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-d5gfp" podStartSLOduration=1.17902052 podStartE2EDuration="5.216575334s" podCreationTimestamp="2025-11-23 09:08:00 +0000 UTC" firstStartedPulling="2025-11-23 09:08:00.873275635 +0000 UTC m=+6.844817125" lastFinishedPulling="2025-11-23 09:08:04.910830447 +0000 UTC m=+10.882371939" observedRunningTime="2025-11-23 09:08:05.215914193 +0000 UTC m=+11.187455705" watchObservedRunningTime="2025-11-23 09:08:05.216575334 +0000 UTC m=+11.188116850"
	Nov 23 09:08:05 no-preload-619589 kubelet[722]: I1123 09:08:05.799251     722 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Nov 23 09:08:08 no-preload-619589 kubelet[722]: I1123 09:08:08.193634     722 scope.go:117] "RemoveContainer" containerID="bc6d5b213b2f6dd05620d6da8131d8f328db25e8c3d4fe2a16d3d90267b62824"
	Nov 23 09:08:09 no-preload-619589 kubelet[722]: I1123 09:08:09.198195     722 scope.go:117] "RemoveContainer" containerID="bc6d5b213b2f6dd05620d6da8131d8f328db25e8c3d4fe2a16d3d90267b62824"
	Nov 23 09:08:09 no-preload-619589 kubelet[722]: I1123 09:08:09.198315     722 scope.go:117] "RemoveContainer" containerID="133b067606a6c30c45e9d04b58352d7a92359119337c1d54259e4c5cf7989151"
	Nov 23 09:08:09 no-preload-619589 kubelet[722]: E1123 09:08:09.198505     722 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-9lfkm_kubernetes-dashboard(618cb4b2-a55d-4d4b-b08e-59836433f857)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-9lfkm" podUID="618cb4b2-a55d-4d4b-b08e-59836433f857"
	Nov 23 09:08:10 no-preload-619589 kubelet[722]: I1123 09:08:10.202495     722 scope.go:117] "RemoveContainer" containerID="133b067606a6c30c45e9d04b58352d7a92359119337c1d54259e4c5cf7989151"
	Nov 23 09:08:10 no-preload-619589 kubelet[722]: E1123 09:08:10.202665     722 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-9lfkm_kubernetes-dashboard(618cb4b2-a55d-4d4b-b08e-59836433f857)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-9lfkm" podUID="618cb4b2-a55d-4d4b-b08e-59836433f857"
	Nov 23 09:08:12 no-preload-619589 kubelet[722]: I1123 09:08:12.676875     722 scope.go:117] "RemoveContainer" containerID="133b067606a6c30c45e9d04b58352d7a92359119337c1d54259e4c5cf7989151"
	Nov 23 09:08:12 no-preload-619589 kubelet[722]: E1123 09:08:12.677121     722 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-9lfkm_kubernetes-dashboard(618cb4b2-a55d-4d4b-b08e-59836433f857)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-9lfkm" podUID="618cb4b2-a55d-4d4b-b08e-59836433f857"
	Nov 23 09:08:27 no-preload-619589 kubelet[722]: I1123 09:08:27.127361     722 scope.go:117] "RemoveContainer" containerID="133b067606a6c30c45e9d04b58352d7a92359119337c1d54259e4c5cf7989151"
	Nov 23 09:08:27 no-preload-619589 kubelet[722]: I1123 09:08:27.248448     722 scope.go:117] "RemoveContainer" containerID="133b067606a6c30c45e9d04b58352d7a92359119337c1d54259e4c5cf7989151"
	Nov 23 09:08:27 no-preload-619589 kubelet[722]: I1123 09:08:27.248805     722 scope.go:117] "RemoveContainer" containerID="3400c7d3fe5a0c4d0c4a74a2bbd7dfcc480fe5e231914a3065df81f0bdc925f6"
	Nov 23 09:08:27 no-preload-619589 kubelet[722]: E1123 09:08:27.249536     722 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-9lfkm_kubernetes-dashboard(618cb4b2-a55d-4d4b-b08e-59836433f857)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-9lfkm" podUID="618cb4b2-a55d-4d4b-b08e-59836433f857"
	Nov 23 09:08:32 no-preload-619589 kubelet[722]: I1123 09:08:32.676891     722 scope.go:117] "RemoveContainer" containerID="3400c7d3fe5a0c4d0c4a74a2bbd7dfcc480fe5e231914a3065df81f0bdc925f6"
	Nov 23 09:08:32 no-preload-619589 kubelet[722]: E1123 09:08:32.677158     722 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-9lfkm_kubernetes-dashboard(618cb4b2-a55d-4d4b-b08e-59836433f857)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-9lfkm" podUID="618cb4b2-a55d-4d4b-b08e-59836433f857"
	Nov 23 09:08:45 no-preload-619589 kubelet[722]: I1123 09:08:45.126666     722 scope.go:117] "RemoveContainer" containerID="3400c7d3fe5a0c4d0c4a74a2bbd7dfcc480fe5e231914a3065df81f0bdc925f6"
	Nov 23 09:08:45 no-preload-619589 kubelet[722]: E1123 09:08:45.126829     722 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-9lfkm_kubernetes-dashboard(618cb4b2-a55d-4d4b-b08e-59836433f857)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-9lfkm" podUID="618cb4b2-a55d-4d4b-b08e-59836433f857"
	Nov 23 09:08:49 no-preload-619589 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 23 09:08:49 no-preload-619589 kubelet[722]: I1123 09:08:49.667506     722 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Nov 23 09:08:49 no-preload-619589 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 23 09:08:49 no-preload-619589 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Nov 23 09:08:49 no-preload-619589 systemd[1]: kubelet.service: Consumed 1.750s CPU time.
	
	
	==> kubernetes-dashboard [c7be853bc6291068babb574a6fed0026a725056d23096bb61e1d6ffc9a4a6fa1] <==
	2025/11/23 09:08:04 Using namespace: kubernetes-dashboard
	2025/11/23 09:08:04 Using in-cluster config to connect to apiserver
	2025/11/23 09:08:04 Using secret token for csrf signing
	2025/11/23 09:08:04 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/23 09:08:04 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/23 09:08:04 Successful initial request to the apiserver, version: v1.34.1
	2025/11/23 09:08:04 Generating JWE encryption key
	2025/11/23 09:08:04 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/23 09:08:04 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/23 09:08:05 Initializing JWE encryption key from synchronized object
	2025/11/23 09:08:05 Creating in-cluster Sidecar client
	2025/11/23 09:08:05 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/23 09:08:05 Serving insecurely on HTTP port: 9090
	2025/11/23 09:08:35 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/23 09:08:04 Starting overwatch
	
	
	==> storage-provisioner [23f1cd486ac33874f452790b2608401eddaa4a3bd8f96430c807fcfb5e1937b0] <==
	W1123 09:08:29.687086       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:08:31.691204       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:08:31.696769       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:08:33.700831       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:08:33.705456       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:08:35.708852       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:08:35.714113       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:08:37.718097       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:08:37.724282       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:08:39.728191       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:08:39.733550       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:08:41.736794       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:08:41.745302       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:08:43.748368       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:08:43.853786       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:08:45.856605       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:08:45.861599       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:08:47.864851       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:08:47.868791       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:08:49.872031       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:08:49.876944       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:08:51.879869       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:08:51.884391       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:08:53.888432       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:08:53.893933       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [f2f44d09fd70f07bb62953d4d3a45b5459cdef60ec014cf96fd80a6ed19a134b] <==
	I1123 09:07:57.508626       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1123 09:07:57.513077       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-619589 -n no-preload-619589
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-619589 -n no-preload-619589: exit status 2 (405.25866ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-619589 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/no-preload/serial/Pause (5.85s)
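Note on the status probe in the post-mortem above (helpers_test.go:262): "minikube status --format={{.APIServer}}" prints Running yet exits 2, because minikube reports component health through the exit code as well as the formatted output, so a caller has to check both. Below is a minimal Go sketch of scripting that same probe; the binary path and profile name are taken from this run and are job-specific assumptions anywhere else.

	// statusprobe.go: illustrative sketch only, not the harness's actual helper.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		cmd := exec.Command("out/minikube-linux-amd64",
			"status", "--format={{.APIServer}}", "-p", "no-preload-619589")
		// CombinedOutput returns the printed state; a non-nil error carries
		// the non-zero exit status.
		out, err := cmd.CombinedOutput()
		state := strings.TrimSpace(string(out))
		if err != nil {
			// Matches the run above: state is "Running" but the process
			// still exits 2 for a degraded cluster.
			fmt.Printf("apiserver %q, but status exited with: %v\n", state, err)
			return
		}
		fmt.Printf("apiserver: %s\n", state)
	}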

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (2.03s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-531046 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-531046 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (256.231786ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T09:09:04Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-531046 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
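The MK_ADDON_ENABLE_PAUSED exit above is minikube's pre-flight "check paused" step: per the error chain in the stderr block, it shells out to "sudo runc list -f json", and here the command itself fails because /run/runc is missing on the crio node. Below is a hedged Go sketch of that kind of check; the id/status JSON fields are assumptions about runc's list output, and minikube's real implementation may differ.

	// pausedcheck.go: a minimal sketch, assuming "runc list -f json" prints a
	// JSON array of container states with "id" and "status" fields. Mirrors
	// the check behind the MK_ADDON_ENABLE_PAUSED error, not minikube itself.
	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	type containerState struct {
		ID     string `json:"id"`
		Status string `json:"status"`
	}

	func pausedContainers() ([]string, error) {
		out, err := exec.Command("sudo", "runc", "list", "-f", "json").Output()
		if err != nil {
			// On this node the command itself exits 1 with
			// "open /run/runc: no such file or directory".
			return nil, fmt.Errorf("runc list: %w", err)
		}
		var states []containerState
		if err := json.Unmarshal(out, &states); err != nil {
			return nil, err
		}
		var paused []string
		for _, s := range states {
			if s.Status == "paused" {
				paused = append(paused, s.ID)
			}
		}
		return paused, nil
	}

	func main() {
		ids, err := pausedContainers()
		if err != nil {
			fmt.Println("check paused:", err)
			return
		}
		fmt.Println("paused containers:", ids)
	}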
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-531046
helpers_test.go:243: (dbg) docker inspect newest-cni-531046:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "7ad7518812cf34e404a70b8ee996e1d79bb5f390569d392d15236f2eb4c5d18d",
	        "Created": "2025-11-23T09:08:44.244823038Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 423020,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-23T09:08:44.287946038Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:133ca4ac39008d0056ad45d8cb70521d6b70d6e1b8bbff4678fd4b354efbdf70",
	        "ResolvConfPath": "/var/lib/docker/containers/7ad7518812cf34e404a70b8ee996e1d79bb5f390569d392d15236f2eb4c5d18d/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/7ad7518812cf34e404a70b8ee996e1d79bb5f390569d392d15236f2eb4c5d18d/hostname",
	        "HostsPath": "/var/lib/docker/containers/7ad7518812cf34e404a70b8ee996e1d79bb5f390569d392d15236f2eb4c5d18d/hosts",
	        "LogPath": "/var/lib/docker/containers/7ad7518812cf34e404a70b8ee996e1d79bb5f390569d392d15236f2eb4c5d18d/7ad7518812cf34e404a70b8ee996e1d79bb5f390569d392d15236f2eb4c5d18d-json.log",
	        "Name": "/newest-cni-531046",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "newest-cni-531046:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "newest-cni-531046",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "7ad7518812cf34e404a70b8ee996e1d79bb5f390569d392d15236f2eb4c5d18d",
	                "LowerDir": "/var/lib/docker/overlay2/a6b8cbeab294cec452e6084f26224fb1434adf265da8070f9f1f559341474ade-init/diff:/var/lib/docker/overlay2/5d7426d7b7590330a796f9ce8929a3dfaf6f95af5e86f4e4ea7f2a7f53308616/diff",
	                "MergedDir": "/var/lib/docker/overlay2/a6b8cbeab294cec452e6084f26224fb1434adf265da8070f9f1f559341474ade/merged",
	                "UpperDir": "/var/lib/docker/overlay2/a6b8cbeab294cec452e6084f26224fb1434adf265da8070f9f1f559341474ade/diff",
	                "WorkDir": "/var/lib/docker/overlay2/a6b8cbeab294cec452e6084f26224fb1434adf265da8070f9f1f559341474ade/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "newest-cni-531046",
	                "Source": "/var/lib/docker/volumes/newest-cni-531046/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-531046",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-531046",
	                "name.minikube.sigs.k8s.io": "newest-cni-531046",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "5475c1429193a1a3aeb19c762be5395ff26228d24c3e8d304c429cb5a6c22fce",
	            "SandboxKey": "/var/run/docker/netns/5475c1429193",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33128"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33129"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33132"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33130"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33131"
	                    }
	                ]
	            },
	            "Networks": {
	                "newest-cni-531046": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "544bf9b9ea19f870a2f79e0c461f820624a157b8c35e72ac8d0afba61525282f",
	                    "EndpointID": "2f70d309270c1239de6efc5d920cd84c8cfbb4753991d1270d72dc060e913a57",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "MacAddress": "ee:8b:72:2d:f9:0c",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-531046",
	                        "7ad7518812cf"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
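One practical detail in the inspect output above: every container port is published on 127.0.0.1 with an ephemeral host port (SSH 22/tcp on 33128, the apiserver's 8443/tcp on 33131). A small Go sketch of reading such a mapping back via docker's inspect template syntax follows; the container name and port are the ones from this run and would differ elsewhere.

	// hostport.go: a minimal sketch of recovering a published host port from
	// the docker inspect data shown above; illustrative, not harness code.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		format := `{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}`
		out, err := exec.Command("docker", "inspect", "-f", format, "newest-cni-531046").Output()
		if err != nil {
			fmt.Println("inspect:", err)
			return
		}
		// For the run captured above this prints 33131, matching the
		// "8443/tcp" entry in NetworkSettings.Ports.
		fmt.Printf("apiserver reachable at 127.0.0.1:%s\n", strings.TrimSpace(string(out)))
	}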
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-531046 -n newest-cni-531046
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-531046 logs -n 25
helpers_test.go:260: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ stop    │ -p old-k8s-version-054094 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-054094       │ jenkins │ v1.37.0 │ 23 Nov 25 09:07 UTC │ 23 Nov 25 09:07 UTC │
	│ addons  │ enable metrics-server -p no-preload-619589 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-619589            │ jenkins │ v1.37.0 │ 23 Nov 25 09:07 UTC │                     │
	│ stop    │ -p no-preload-619589 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-619589            │ jenkins │ v1.37.0 │ 23 Nov 25 09:07 UTC │ 23 Nov 25 09:07 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-054094 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-054094       │ jenkins │ v1.37.0 │ 23 Nov 25 09:07 UTC │ 23 Nov 25 09:07 UTC │
	│ start   │ -p old-k8s-version-054094 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-054094       │ jenkins │ v1.37.0 │ 23 Nov 25 09:07 UTC │ 23 Nov 25 09:08 UTC │
	│ addons  │ enable dashboard -p no-preload-619589 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-619589            │ jenkins │ v1.37.0 │ 23 Nov 25 09:07 UTC │ 23 Nov 25 09:07 UTC │
	│ start   │ -p no-preload-619589 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-619589            │ jenkins │ v1.37.0 │ 23 Nov 25 09:07 UTC │ 23 Nov 25 09:08 UTC │
	│ addons  │ enable metrics-server -p embed-certs-529341 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-529341           │ jenkins │ v1.37.0 │ 23 Nov 25 09:07 UTC │                     │
	│ stop    │ -p embed-certs-529341 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-529341           │ jenkins │ v1.37.0 │ 23 Nov 25 09:07 UTC │ 23 Nov 25 09:08 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-602386 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-602386 │ jenkins │ v1.37.0 │ 23 Nov 25 09:08 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-602386 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-602386 │ jenkins │ v1.37.0 │ 23 Nov 25 09:08 UTC │ 23 Nov 25 09:08 UTC │
	│ addons  │ enable dashboard -p embed-certs-529341 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-529341           │ jenkins │ v1.37.0 │ 23 Nov 25 09:08 UTC │ 23 Nov 25 09:08 UTC │
	│ start   │ -p embed-certs-529341 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-529341           │ jenkins │ v1.37.0 │ 23 Nov 25 09:08 UTC │ 23 Nov 25 09:09 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-602386 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-602386 │ jenkins │ v1.37.0 │ 23 Nov 25 09:08 UTC │ 23 Nov 25 09:08 UTC │
	│ start   │ -p default-k8s-diff-port-602386 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-602386 │ jenkins │ v1.37.0 │ 23 Nov 25 09:08 UTC │                     │
	│ image   │ old-k8s-version-054094 image list --format=json                                                                                                                                                                                               │ old-k8s-version-054094       │ jenkins │ v1.37.0 │ 23 Nov 25 09:08 UTC │ 23 Nov 25 09:08 UTC │
	│ pause   │ -p old-k8s-version-054094 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-054094       │ jenkins │ v1.37.0 │ 23 Nov 25 09:08 UTC │                     │
	│ delete  │ -p old-k8s-version-054094                                                                                                                                                                                                                     │ old-k8s-version-054094       │ jenkins │ v1.37.0 │ 23 Nov 25 09:08 UTC │ 23 Nov 25 09:08 UTC │
	│ delete  │ -p old-k8s-version-054094                                                                                                                                                                                                                     │ old-k8s-version-054094       │ jenkins │ v1.37.0 │ 23 Nov 25 09:08 UTC │ 23 Nov 25 09:08 UTC │
	│ start   │ -p newest-cni-531046 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-531046            │ jenkins │ v1.37.0 │ 23 Nov 25 09:08 UTC │ 23 Nov 25 09:09 UTC │
	│ image   │ no-preload-619589 image list --format=json                                                                                                                                                                                                    │ no-preload-619589            │ jenkins │ v1.37.0 │ 23 Nov 25 09:08 UTC │ 23 Nov 25 09:08 UTC │
	│ pause   │ -p no-preload-619589 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-619589            │ jenkins │ v1.37.0 │ 23 Nov 25 09:08 UTC │                     │
	│ delete  │ -p no-preload-619589                                                                                                                                                                                                                          │ no-preload-619589            │ jenkins │ v1.37.0 │ 23 Nov 25 09:08 UTC │ 23 Nov 25 09:08 UTC │
	│ delete  │ -p no-preload-619589                                                                                                                                                                                                                          │ no-preload-619589            │ jenkins │ v1.37.0 │ 23 Nov 25 09:08 UTC │ 23 Nov 25 09:08 UTC │
	│ addons  │ enable metrics-server -p newest-cni-531046 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-531046            │ jenkins │ v1.37.0 │ 23 Nov 25 09:09 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/23 09:08:38
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1123 09:08:38.063057  422371 out.go:360] Setting OutFile to fd 1 ...
	I1123 09:08:38.063185  422371 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 09:08:38.063194  422371 out.go:374] Setting ErrFile to fd 2...
	I1123 09:08:38.063199  422371 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 09:08:38.063491  422371 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21969-103686/.minikube/bin
	I1123 09:08:38.064118  422371 out.go:368] Setting JSON to false
	I1123 09:08:38.065952  422371 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":6658,"bootTime":1763882260,"procs":454,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1123 09:08:38.066040  422371 start.go:143] virtualization: kvm guest
	I1123 09:08:38.068178  422371 out.go:179] * [newest-cni-531046] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1123 09:08:38.069546  422371 out.go:179]   - MINIKUBE_LOCATION=21969
	I1123 09:08:38.069540  422371 notify.go:221] Checking for updates...
	I1123 09:08:38.071773  422371 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1123 09:08:38.073033  422371 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21969-103686/kubeconfig
	I1123 09:08:38.078218  422371 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21969-103686/.minikube
	I1123 09:08:38.079577  422371 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1123 09:08:38.080792  422371 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1123 09:08:38.082709  422371 config.go:182] Loaded profile config "default-k8s-diff-port-602386": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 09:08:38.082880  422371 config.go:182] Loaded profile config "embed-certs-529341": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 09:08:38.083058  422371 config.go:182] Loaded profile config "no-preload-619589": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 09:08:38.083206  422371 driver.go:422] Setting default libvirt URI to qemu:///system
	I1123 09:08:38.112463  422371 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1123 09:08:38.112578  422371 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 09:08:38.187928  422371 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:67 OomKillDisable:false NGoroutines:78 SystemTime:2025-11-23 09:08:38.174595805 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1123 09:08:38.188138  422371 docker.go:319] overlay module found
	I1123 09:08:38.190297  422371 out.go:179] * Using the docker driver based on user configuration
	I1123 09:08:38.194907  422371 start.go:309] selected driver: docker
	I1123 09:08:38.194937  422371 start.go:927] validating driver "docker" against <nil>
	I1123 09:08:38.194956  422371 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1123 09:08:38.195732  422371 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 09:08:38.276488  422371 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:67 OomKillDisable:false NGoroutines:78 SystemTime:2025-11-23 09:08:38.264202445 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1123 09:08:38.276771  422371 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	W1123 09:08:38.276823  422371 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1123 09:08:38.277409  422371 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1123 09:08:38.280766  422371 out.go:179] * Using Docker driver with root privileges
	I1123 09:08:38.283289  422371 cni.go:84] Creating CNI manager for ""
	I1123 09:08:38.283395  422371 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1123 09:08:38.283415  422371 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1123 09:08:38.283547  422371 start.go:353] cluster config:
	{Name:newest-cni-531046 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-531046 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 09:08:38.285004  422371 out.go:179] * Starting "newest-cni-531046" primary control-plane node in "newest-cni-531046" cluster
	I1123 09:08:38.286897  422371 cache.go:134] Beginning downloading kic base image for docker with crio
	I1123 09:08:38.288156  422371 out.go:179] * Pulling base image v0.0.48-1763789673-21948 ...
	I1123 09:08:38.290670  422371 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1123 09:08:38.290731  422371 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21969-103686/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1123 09:08:38.290730  422371 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1123 09:08:38.290745  422371 cache.go:65] Caching tarball of preloaded images
	I1123 09:08:38.290879  422371 preload.go:238] Found /home/jenkins/minikube-integration/21969-103686/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1123 09:08:38.290899  422371 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1123 09:08:38.291221  422371 profile.go:143] Saving config to /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/newest-cni-531046/config.json ...
	I1123 09:08:38.291304  422371 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/newest-cni-531046/config.json: {Name:mk7c2c302507534cf8c19e4462e0d95cc43f265c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 09:08:38.318685  422371 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon, skipping pull
	I1123 09:08:38.318710  422371 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in daemon, skipping load
	I1123 09:08:38.318724  422371 cache.go:243] Successfully downloaded all kic artifacts
	I1123 09:08:38.318779  422371 start.go:360] acquireMachinesLock for newest-cni-531046: {Name:mk2e7449a31b4c230f352b5cfe12c4dd1ce5e4f0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1123 09:08:38.318885  422371 start.go:364] duration metric: took 86.746µs to acquireMachinesLock for "newest-cni-531046"
	I1123 09:08:38.318916  422371 start.go:93] Provisioning new machine with config: &{Name:newest-cni-531046 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-531046 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1123 09:08:38.319041  422371 start.go:125] createHost starting for "" (driver="docker")
	W1123 09:08:35.978152  416838 pod_ready.go:104] pod "coredns-66bc5c9577-64rdm" is not "Ready", error: <nil>
	W1123 09:08:38.464711  416838 pod_ready.go:104] pod "coredns-66bc5c9577-64rdm" is not "Ready", error: <nil>
	W1123 09:08:39.107107  415250 pod_ready.go:104] pod "coredns-66bc5c9577-k4bmj" is not "Ready", error: <nil>
	W1123 09:08:41.606781  415250 pod_ready.go:104] pod "coredns-66bc5c9577-k4bmj" is not "Ready", error: <nil>
	I1123 09:08:38.321329  422371 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1123 09:08:38.321615  422371 start.go:159] libmachine.API.Create for "newest-cni-531046" (driver="docker")
	I1123 09:08:38.321673  422371 client.go:173] LocalClient.Create starting
	I1123 09:08:38.321773  422371 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21969-103686/.minikube/certs/ca.pem
	I1123 09:08:38.321807  422371 main.go:143] libmachine: Decoding PEM data...
	I1123 09:08:38.321832  422371 main.go:143] libmachine: Parsing certificate...
	I1123 09:08:38.321892  422371 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21969-103686/.minikube/certs/cert.pem
	I1123 09:08:38.321919  422371 main.go:143] libmachine: Decoding PEM data...
	I1123 09:08:38.321937  422371 main.go:143] libmachine: Parsing certificate...
	I1123 09:08:38.322379  422371 cli_runner.go:164] Run: docker network inspect newest-cni-531046 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1123 09:08:38.344902  422371 cli_runner.go:211] docker network inspect newest-cni-531046 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1123 09:08:38.345023  422371 network_create.go:284] running [docker network inspect newest-cni-531046] to gather additional debugging logs...
	I1123 09:08:38.345053  422371 cli_runner.go:164] Run: docker network inspect newest-cni-531046
	W1123 09:08:38.367879  422371 cli_runner.go:211] docker network inspect newest-cni-531046 returned with exit code 1
	I1123 09:08:38.367919  422371 network_create.go:287] error running [docker network inspect newest-cni-531046]: docker network inspect newest-cni-531046: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-531046 not found
	I1123 09:08:38.367936  422371 network_create.go:289] output of [docker network inspect newest-cni-531046]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-531046 not found
	
	** /stderr **
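For context, the inspect failure above is the expected probe result for a network that does not exist yet: "docker network inspect" exits 1 and prints "network ... not found", which minikube takes as the signal to create the network itself. A minimal manual reproduction of the probe (network name taken from the log):

    # Exit status 1 here means the bridge network must still be created.
    if ! docker network inspect newest-cni-531046 >/dev/null 2>&1; then
        echo "network newest-cni-531046 missing; minikube will create it"
    fi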
	I1123 09:08:38.368061  422371 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1123 09:08:38.393249  422371 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-f35ea3fda0f8 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:12:67:c4:67:42:d0} reservation:<nil>}
	I1123 09:08:38.394053  422371 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-b5718ee288aa IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:5e:cf:46:ea:6c:f7} reservation:<nil>}
	I1123 09:08:38.394911  422371 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-7539aab81c9c IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:ca:4a:40:12:17:c0} reservation:<nil>}
	I1123 09:08:38.395851  422371 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000364fa0}
	I1123 09:08:38.395895  422371 network_create.go:124] attempt to create docker network newest-cni-531046 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1123 09:08:38.395992  422371 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-531046 newest-cni-531046
	I1123 09:08:38.468495  422371 network_create.go:108] docker network newest-cni-531046 192.168.76.0/24 created
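The subnet scan above walks minikube's candidate private /24 ranges (192.168.49.0, .58.0, .67.0, ...) and takes the first one with no existing bridge interface. The create step can be re-run by hand with the values from the log; this sketch keeps only one of the two minikube labels for brevity:

    # Values copied from the log. The bare "-o --ip-masq -o --icc" pair is
    # passed through verbatim by minikube and stored as raw driver options.
    docker network create --driver=bridge \
        --subnet=192.168.76.0/24 --gateway=192.168.76.1 \
        -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 \
        --label=name.minikube.sigs.k8s.io=newest-cni-531046 \
        newest-cni-531046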
	I1123 09:08:38.468547  422371 kic.go:121] calculated static IP "192.168.76.2" for the "newest-cni-531046" container
	I1123 09:08:38.468621  422371 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1123 09:08:38.493023  422371 cli_runner.go:164] Run: docker volume create newest-cni-531046 --label name.minikube.sigs.k8s.io=newest-cni-531046 --label created_by.minikube.sigs.k8s.io=true
	I1123 09:08:38.517144  422371 oci.go:103] Successfully created a docker volume newest-cni-531046
	I1123 09:08:38.517276  422371 cli_runner.go:164] Run: docker run --rm --name newest-cni-531046-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-531046 --entrypoint /usr/bin/test -v newest-cni-531046:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -d /var/lib
	I1123 09:08:39.766029  422371 cli_runner.go:217] Completed: docker run --rm --name newest-cni-531046-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-531046 --entrypoint /usr/bin/test -v newest-cni-531046:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -d /var/lib: (1.248658868s)
	I1123 09:08:39.766066  422371 oci.go:107] Successfully prepared a docker volume newest-cni-531046
	I1123 09:08:39.766102  422371 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1123 09:08:39.766113  422371 kic.go:194] Starting extracting preloaded images to volume ...
	I1123 09:08:39.766178  422371 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21969-103686/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-531046:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -I lz4 -xf /preloaded.tar -C /extractDir
	W1123 09:08:40.961999  416838 pod_ready.go:104] pod "coredns-66bc5c9577-64rdm" is not "Ready", error: <nil>
	W1123 09:08:43.152956  416838 pod_ready.go:104] pod "coredns-66bc5c9577-64rdm" is not "Ready", error: <nil>
	I1123 09:08:44.168223  422371 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21969-103686/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-531046:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -I lz4 -xf /preloaded.tar -C /extractDir: (4.401997577s)
	I1123 09:08:44.168259  422371 kic.go:203] duration metric: took 4.402142717s to extract preloaded images to volume ...
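The two docker runs above implement the preload: a throwaway "sidecar" container first verifies the volume is mountable (/usr/bin/test -d /var/lib), then a second one untars the cached image preload into it. A condensed sketch, assuming the default ~/.minikube cache location (the log uses the Jenkins workspace path, and the image digest is omitted here for readability):

    # Extract the CRI-O image preload into the node's docker volume.
    PRELOAD=$HOME/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
    docker run --rm --entrypoint /usr/bin/tar \
        -v "$PRELOAD:/preloaded.tar:ro" \
        -v newest-cni-531046:/extractDir \
        gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948 \
        -I lz4 -xf /preloaded.tar -C /extractDir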
	W1123 09:08:44.168355  422371 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1123 09:08:44.168395  422371 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1123 09:08:44.168452  422371 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1123 09:08:44.227151  422371 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-531046 --name newest-cni-531046 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-531046 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-531046 --network newest-cni-531046 --ip 192.168.76.2 --volume newest-cni-531046:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f
	I1123 09:08:44.537884  422371 cli_runner.go:164] Run: docker container inspect newest-cni-531046 --format={{.State.Running}}
	I1123 09:08:44.557705  422371 cli_runner.go:164] Run: docker container inspect newest-cni-531046 --format={{.State.Status}}
	I1123 09:08:44.577764  422371 cli_runner.go:164] Run: docker exec newest-cni-531046 stat /var/lib/dpkg/alternatives/iptables
	I1123 09:08:44.624704  422371 oci.go:144] the created container "newest-cni-531046" has a running status.
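After the privileged docker run, minikube polls container state and sanity-checks the rootfs, as the three cli_runner lines above show. The same checks by hand:

    docker container inspect newest-cni-531046 --format '{{.State.Status}}'   # expect: running
    docker exec newest-cni-531046 stat /var/lib/dpkg/alternatives/iptables    # rootfs sanity check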
	I1123 09:08:44.624733  422371 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21969-103686/.minikube/machines/newest-cni-531046/id_rsa...
	I1123 09:08:44.736260  422371 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21969-103686/.minikube/machines/newest-cni-531046/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1123 09:08:44.766667  422371 cli_runner.go:164] Run: docker container inspect newest-cni-531046 --format={{.State.Status}}
	I1123 09:08:44.790662  422371 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1123 09:08:44.790697  422371 kic_runner.go:114] Args: [docker exec --privileged newest-cni-531046 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1123 09:08:44.838987  422371 cli_runner.go:164] Run: docker container inspect newest-cni-531046 --format={{.State.Status}}
	I1123 09:08:44.862887  422371 machine.go:94] provisionDockerMachine start ...
	I1123 09:08:44.863033  422371 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-531046
	I1123 09:08:44.883292  422371 main.go:143] libmachine: Using SSH client type: native
	I1123 09:08:44.883587  422371 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33128 <nil> <nil>}
	I1123 09:08:44.883606  422371 main.go:143] libmachine: About to run SSH command:
	hostname
	I1123 09:08:45.031996  422371 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-531046
	
	I1123 09:08:45.032028  422371 ubuntu.go:182] provisioning hostname "newest-cni-531046"
	I1123 09:08:45.032102  422371 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-531046
	I1123 09:08:45.051178  422371 main.go:143] libmachine: Using SSH client type: native
	I1123 09:08:45.051497  422371 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33128 <nil> <nil>}
	I1123 09:08:45.051524  422371 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-531046 && echo "newest-cni-531046" | sudo tee /etc/hostname
	I1123 09:08:45.208664  422371 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-531046
	
	I1123 09:08:45.208761  422371 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-531046
	I1123 09:08:45.227777  422371 main.go:143] libmachine: Using SSH client type: native
	I1123 09:08:45.228018  422371 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33128 <nil> <nil>}
	I1123 09:08:45.228039  422371 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-531046' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-531046/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-531046' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1123 09:08:45.371606  422371 main.go:143] libmachine: SSH cmd err, output: <nil>: 
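The three SSH round-trips above read, set, and pin the hostname. As one simplified, idempotent script (hostname hard-coded from the log; the real check uses the stricter grep -xq patterns shown above):

    NAME=newest-cni-531046
    sudo hostname "$NAME" && echo "$NAME" | sudo tee /etc/hostname
    # Pin 127.0.1.1 to the hostname so local tools resolve it without DNS.
    grep -q "$NAME" /etc/hosts || echo "127.0.1.1 $NAME" | sudo tee -a /etc/hosts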
	I1123 09:08:45.371632  422371 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21969-103686/.minikube CaCertPath:/home/jenkins/minikube-integration/21969-103686/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21969-103686/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21969-103686/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21969-103686/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21969-103686/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21969-103686/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21969-103686/.minikube}
	I1123 09:08:45.371651  422371 ubuntu.go:190] setting up certificates
	I1123 09:08:45.371662  422371 provision.go:84] configureAuth start
	I1123 09:08:45.371721  422371 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-531046
	I1123 09:08:45.389774  422371 provision.go:143] copyHostCerts
	I1123 09:08:45.389830  422371 exec_runner.go:144] found /home/jenkins/minikube-integration/21969-103686/.minikube/ca.pem, removing ...
	I1123 09:08:45.389843  422371 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21969-103686/.minikube/ca.pem
	I1123 09:08:45.389919  422371 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21969-103686/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21969-103686/.minikube/ca.pem (1078 bytes)
	I1123 09:08:45.390046  422371 exec_runner.go:144] found /home/jenkins/minikube-integration/21969-103686/.minikube/cert.pem, removing ...
	I1123 09:08:45.390057  422371 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21969-103686/.minikube/cert.pem
	I1123 09:08:45.390089  422371 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21969-103686/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21969-103686/.minikube/cert.pem (1123 bytes)
	I1123 09:08:45.390147  422371 exec_runner.go:144] found /home/jenkins/minikube-integration/21969-103686/.minikube/key.pem, removing ...
	I1123 09:08:45.390155  422371 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21969-103686/.minikube/key.pem
	I1123 09:08:45.390179  422371 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21969-103686/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21969-103686/.minikube/key.pem (1675 bytes)
	I1123 09:08:45.390230  422371 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21969-103686/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21969-103686/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21969-103686/.minikube/certs/ca-key.pem org=jenkins.newest-cni-531046 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-531046]
	I1123 09:08:45.541072  422371 provision.go:177] copyRemoteCerts
	I1123 09:08:45.541133  422371 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1123 09:08:45.541174  422371 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-531046
	I1123 09:08:45.562117  422371 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21969-103686/.minikube/machines/newest-cni-531046/id_rsa Username:docker}
	I1123 09:08:45.667284  422371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-103686/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1123 09:08:45.686630  422371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-103686/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1123 09:08:45.703786  422371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-103686/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
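The server cert copied above was generated with the SAN list shown at the start of configureAuth. A quick way to confirm the SANs on disk (path from the log; -ext needs OpenSSL 1.1.1 or newer):

    openssl x509 -noout -ext subjectAltName \
        -in $HOME/.minikube/machines/server.pem
    # expect entries like: DNS:localhost, DNS:minikube, DNS:newest-cni-531046,
    #                      IP:127.0.0.1, IP:192.168.76.2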
	I1123 09:08:45.722197  422371 provision.go:87] duration metric: took 350.521493ms to configureAuth
	I1123 09:08:45.722225  422371 ubuntu.go:206] setting minikube options for container-runtime
	I1123 09:08:45.722396  422371 config.go:182] Loaded profile config "newest-cni-531046": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 09:08:45.722498  422371 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-531046
	I1123 09:08:45.742391  422371 main.go:143] libmachine: Using SSH client type: native
	I1123 09:08:45.742648  422371 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33128 <nil> <nil>}
	I1123 09:08:45.742671  422371 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1123 09:08:46.039742  422371 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
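The SSH command above drops an environment file for cri-o and bounces the service; to confirm it took effect inside the node container:

    docker exec newest-cni-531046 cat /etc/sysconfig/crio.minikube
    docker exec newest-cni-531046 systemctl is-active crio   # expect: active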
	
	I1123 09:08:46.039768  422371 machine.go:97] duration metric: took 1.176833241s to provisionDockerMachine
	I1123 09:08:46.039779  422371 client.go:176] duration metric: took 7.718098891s to LocalClient.Create
	I1123 09:08:46.039798  422371 start.go:167] duration metric: took 7.718185893s to libmachine.API.Create "newest-cni-531046"
	I1123 09:08:46.039814  422371 start.go:293] postStartSetup for "newest-cni-531046" (driver="docker")
	I1123 09:08:46.039831  422371 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1123 09:08:46.039890  422371 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1123 09:08:46.039953  422371 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-531046
	I1123 09:08:46.058468  422371 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21969-103686/.minikube/machines/newest-cni-531046/id_rsa Username:docker}
	I1123 09:08:46.161505  422371 ssh_runner.go:195] Run: cat /etc/os-release
	I1123 09:08:46.164981  422371 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1123 09:08:46.165015  422371 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1123 09:08:46.165036  422371 filesync.go:126] Scanning /home/jenkins/minikube-integration/21969-103686/.minikube/addons for local assets ...
	I1123 09:08:46.165097  422371 filesync.go:126] Scanning /home/jenkins/minikube-integration/21969-103686/.minikube/files for local assets ...
	I1123 09:08:46.165191  422371 filesync.go:149] local asset: /home/jenkins/minikube-integration/21969-103686/.minikube/files/etc/ssl/certs/1072342.pem -> 1072342.pem in /etc/ssl/certs
	I1123 09:08:46.165314  422371 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1123 09:08:46.172750  422371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-103686/.minikube/files/etc/ssl/certs/1072342.pem --> /etc/ssl/certs/1072342.pem (1708 bytes)
	I1123 09:08:46.192188  422371 start.go:296] duration metric: took 152.355864ms for postStartSetup
	I1123 09:08:46.192503  422371 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-531046
	I1123 09:08:46.210543  422371 profile.go:143] Saving config to /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/newest-cni-531046/config.json ...
	I1123 09:08:46.210794  422371 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1123 09:08:46.210839  422371 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-531046
	I1123 09:08:46.227599  422371 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21969-103686/.minikube/machines/newest-cni-531046/id_rsa Username:docker}
	I1123 09:08:46.326028  422371 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1123 09:08:46.330652  422371 start.go:128] duration metric: took 8.011592804s to createHost
	I1123 09:08:46.330683  422371 start.go:83] releasing machines lock for "newest-cni-531046", held for 8.011781957s
	I1123 09:08:46.330787  422371 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-531046
	I1123 09:08:46.349571  422371 ssh_runner.go:195] Run: cat /version.json
	I1123 09:08:46.349646  422371 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1123 09:08:46.349654  422371 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-531046
	I1123 09:08:46.349732  422371 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-531046
	I1123 09:08:46.369439  422371 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21969-103686/.minikube/machines/newest-cni-531046/id_rsa Username:docker}
	I1123 09:08:46.369528  422371 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21969-103686/.minikube/machines/newest-cni-531046/id_rsa Username:docker}
	I1123 09:08:46.522442  422371 ssh_runner.go:195] Run: systemctl --version
	I1123 09:08:46.528993  422371 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1123 09:08:46.564054  422371 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1123 09:08:46.569001  422371 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1123 09:08:46.569074  422371 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1123 09:08:46.595210  422371 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
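The find invocation above renames any bridge or podman CNI configs out of the way so that the CNI minikube picks later (kindnet, below) is the only one cri-o sees. With the shell escaping spelled out:

    # Rename *bridge*/*podman* CNI configs to *.mk_disabled, skipping ones
    # that were already disabled on a previous run.
    sudo find /etc/cni/net.d -maxdepth 1 -type f \
        \( \( -name '*bridge*' -o -name '*podman*' \) -a -not -name '*.mk_disabled' \) \
        -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;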
	I1123 09:08:46.595236  422371 start.go:496] detecting cgroup driver to use...
	I1123 09:08:46.595269  422371 detect.go:190] detected "systemd" cgroup driver on host os
	I1123 09:08:46.595320  422371 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1123 09:08:46.613403  422371 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1123 09:08:46.626130  422371 docker.go:218] disabling cri-docker service (if available) ...
	I1123 09:08:46.626178  422371 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1123 09:08:46.642775  422371 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1123 09:08:46.659791  422371 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1123 09:08:46.745157  422371 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1123 09:08:46.834362  422371 docker.go:234] disabling docker service ...
	I1123 09:08:46.834431  422371 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1123 09:08:46.852811  422371 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1123 09:08:46.865931  422371 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1123 09:08:46.951051  422371 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1123 09:08:47.039859  422371 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1123 09:08:47.052884  422371 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1123 09:08:47.067115  422371 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1123 09:08:47.067181  422371 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 09:08:47.077039  422371 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1123 09:08:47.077101  422371 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 09:08:47.086161  422371 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 09:08:47.094941  422371 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 09:08:47.103813  422371 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1123 09:08:47.112463  422371 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 09:08:47.121360  422371 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 09:08:47.135303  422371 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 09:08:47.144129  422371 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1123 09:08:47.151329  422371 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1123 09:08:47.158513  422371 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 09:08:47.240112  422371 ssh_runner.go:195] Run: sudo systemctl restart crio
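The sed sequence above rewrites /etc/crio/crio.conf.d/02-crio.conf in place: pause image, systemd cgroup manager, conmon cgroup, and the unprivileged-port sysctl. The core of it, condensed (same file, same keys; the sysctl steps are omitted here):

    CONF=/etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' "$CONF"
    sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' "$CONF"
    sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' "$CONF"
    sudo systemctl daemon-reload && sudo systemctl restart crio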
	I1123 09:08:47.375544  422371 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1123 09:08:47.375611  422371 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1123 09:08:47.379548  422371 start.go:564] Will wait 60s for crictl version
	I1123 09:08:47.379618  422371 ssh_runner.go:195] Run: which crictl
	I1123 09:08:47.383442  422371 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1123 09:08:47.410017  422371 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1123 09:08:47.410104  422371 ssh_runner.go:195] Run: crio --version
	I1123 09:08:47.437993  422371 ssh_runner.go:195] Run: crio --version
	I1123 09:08:47.468147  422371 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1123 09:08:47.469409  422371 cli_runner.go:164] Run: docker network inspect newest-cni-531046 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1123 09:08:47.488435  422371 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1123 09:08:47.492623  422371 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
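The hosts rewrite above is minikube's idempotent pattern: filter out any stale entry, append the fresh one, then copy the temp file back with sudo (a plain redirection into /etc/hosts would run without root). Isolated:

    # Drop any old host.minikube.internal line, append the current gateway IP.
    { grep -v $'\thost.minikube.internal$' /etc/hosts
      printf '192.168.76.1\thost.minikube.internal\n'; } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts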
	I1123 09:08:47.504612  422371 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	W1123 09:08:43.724599  415250 pod_ready.go:104] pod "coredns-66bc5c9577-k4bmj" is not "Ready", error: <nil>
	W1123 09:08:46.105841  415250 pod_ready.go:104] pod "coredns-66bc5c9577-k4bmj" is not "Ready", error: <nil>
	I1123 09:08:47.505728  422371 kubeadm.go:884] updating cluster {Name:newest-cni-531046 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-531046 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1123 09:08:47.505853  422371 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1123 09:08:47.505903  422371 ssh_runner.go:195] Run: sudo crictl images --output json
	I1123 09:08:47.538254  422371 crio.go:514] all images are preloaded for cri-o runtime.
	I1123 09:08:47.538286  422371 crio.go:433] Images already preloaded, skipping extraction
	I1123 09:08:47.538352  422371 ssh_runner.go:195] Run: sudo crictl images --output json
	I1123 09:08:47.565164  422371 crio.go:514] all images are preloaded for cri-o runtime.
	I1123 09:08:47.565187  422371 cache_images.go:86] Images are preloaded, skipping loading
	I1123 09:08:47.565194  422371 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1123 09:08:47.565289  422371 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-531046 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-531046 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1123 09:08:47.565368  422371 ssh_runner.go:195] Run: crio config
	I1123 09:08:47.612790  422371 cni.go:84] Creating CNI manager for ""
	I1123 09:08:47.612810  422371 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1123 09:08:47.612827  422371 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1123 09:08:47.612854  422371 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-531046 NodeName:newest-cni-531046 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1123 09:08:47.613060  422371 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-531046"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1123 09:08:47.613154  422371 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1123 09:08:47.621671  422371 binaries.go:51] Found k8s binaries, skipping transfer
	I1123 09:08:47.621729  422371 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1123 09:08:47.630324  422371 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1123 09:08:47.644458  422371 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1123 09:08:47.660152  422371 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2211 bytes)
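The rendered kubeadm config shown above is what the scp line here writes to /var/tmp/minikube/kubeadm.yaml.new. On recent kubeadm releases (v1.26+) the file can be checked before it is handed to init; a sketch, assuming the bundled v1.34.1 binary is used:

    /var/lib/minikube/binaries/v1.34.1/kubeadm config validate \
        --config /var/tmp/minikube/kubeadm.yaml.new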
	I1123 09:08:47.673398  422371 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1123 09:08:47.677342  422371 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1123 09:08:47.687438  422371 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 09:08:47.763532  422371 ssh_runner.go:195] Run: sudo systemctl start kubelet
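At this point the kubelet unit file and its kubeadm drop-in (both scp'd above) are in place and the service has been started; systemd can show the merged result:

    systemctl cat kubelet        # unit file plus the 10-kubeadm.conf drop-in
    systemctl is-active kubelet  # expect: active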
	I1123 09:08:47.791488  422371 certs.go:69] Setting up /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/newest-cni-531046 for IP: 192.168.76.2
	I1123 09:08:47.791510  422371 certs.go:195] generating shared ca certs ...
	I1123 09:08:47.791526  422371 certs.go:227] acquiring lock for ca certs: {Name:mkeed0bc088459219396fb35c578dfb0927f31a3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 09:08:47.791688  422371 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21969-103686/.minikube/ca.key
	I1123 09:08:47.791739  422371 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21969-103686/.minikube/proxy-client-ca.key
	I1123 09:08:47.791753  422371 certs.go:257] generating profile certs ...
	I1123 09:08:47.791817  422371 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/newest-cni-531046/client.key
	I1123 09:08:47.791838  422371 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/newest-cni-531046/client.crt with IP's: []
	I1123 09:08:48.032392  422371 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/newest-cni-531046/client.crt ...
	I1123 09:08:48.032421  422371 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/newest-cni-531046/client.crt: {Name:mk976144a784e1f402ce91ac1356851c2af8ab52 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 09:08:48.032598  422371 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/newest-cni-531046/client.key ...
	I1123 09:08:48.032609  422371 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/newest-cni-531046/client.key: {Name:mk8689b72f501cc91be234b56f833c373d45d735 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 09:08:48.032703  422371 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/newest-cni-531046/apiserver.key.a1ea44be
	I1123 09:08:48.032718  422371 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/newest-cni-531046/apiserver.crt.a1ea44be with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1123 09:08:48.154375  422371 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/newest-cni-531046/apiserver.crt.a1ea44be ...
	I1123 09:08:48.154406  422371 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/newest-cni-531046/apiserver.crt.a1ea44be: {Name:mk85fa4339f770b1cc1a8ab21bd48c1535d0f2e2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 09:08:48.154593  422371 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/newest-cni-531046/apiserver.key.a1ea44be ...
	I1123 09:08:48.154615  422371 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/newest-cni-531046/apiserver.key.a1ea44be: {Name:mkb7dccd9dbd4c24a7085c85e649fe0ef0b2bed2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 09:08:48.154724  422371 certs.go:382] copying /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/newest-cni-531046/apiserver.crt.a1ea44be -> /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/newest-cni-531046/apiserver.crt
	I1123 09:08:48.154801  422371 certs.go:386] copying /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/newest-cni-531046/apiserver.key.a1ea44be -> /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/newest-cni-531046/apiserver.key
	I1123 09:08:48.154856  422371 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/newest-cni-531046/proxy-client.key
	I1123 09:08:48.154871  422371 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/newest-cni-531046/proxy-client.crt with IP's: []
	I1123 09:08:48.278947  422371 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/newest-cni-531046/proxy-client.crt ...
	I1123 09:08:48.278983  422371 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/newest-cni-531046/proxy-client.crt: {Name:mk7a585a20c8ee02e9d23266d3061e7bc61a2b9c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 09:08:48.279150  422371 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/newest-cni-531046/proxy-client.key ...
	I1123 09:08:48.279164  422371 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/newest-cni-531046/proxy-client.key: {Name:mka0e35c957b541ffc74ca4dd08e09a485deaafe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 09:08:48.279336  422371 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-103686/.minikube/certs/107234.pem (1338 bytes)
	W1123 09:08:48.279376  422371 certs.go:480] ignoring /home/jenkins/minikube-integration/21969-103686/.minikube/certs/107234_empty.pem, impossibly tiny 0 bytes
	I1123 09:08:48.279387  422371 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-103686/.minikube/certs/ca-key.pem (1675 bytes)
	I1123 09:08:48.279410  422371 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-103686/.minikube/certs/ca.pem (1078 bytes)
	I1123 09:08:48.279433  422371 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-103686/.minikube/certs/cert.pem (1123 bytes)
	I1123 09:08:48.279455  422371 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-103686/.minikube/certs/key.pem (1675 bytes)
	I1123 09:08:48.279508  422371 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-103686/.minikube/files/etc/ssl/certs/1072342.pem (1708 bytes)
	I1123 09:08:48.280084  422371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-103686/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1123 09:08:48.299345  422371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-103686/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1123 09:08:48.317116  422371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-103686/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1123 09:08:48.334058  422371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-103686/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1671 bytes)
	I1123 09:08:48.350946  422371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/newest-cni-531046/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1123 09:08:48.368229  422371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/newest-cni-531046/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1123 09:08:48.385613  422371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/newest-cni-531046/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1123 09:08:48.402695  422371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/newest-cni-531046/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1123 09:08:48.421116  422371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-103686/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1123 09:08:48.440103  422371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-103686/.minikube/certs/107234.pem --> /usr/share/ca-certificates/107234.pem (1338 bytes)
	I1123 09:08:48.458407  422371 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-103686/.minikube/files/etc/ssl/certs/1072342.pem --> /usr/share/ca-certificates/1072342.pem (1708 bytes)
	I1123 09:08:48.476020  422371 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1123 09:08:48.488750  422371 ssh_runner.go:195] Run: openssl version
	I1123 09:08:48.494927  422371 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1123 09:08:48.503050  422371 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1123 09:08:48.506775  422371 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 23 08:20 /usr/share/ca-certificates/minikubeCA.pem
	I1123 09:08:48.506822  422371 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1123 09:08:48.543113  422371 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1123 09:08:48.552172  422371 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/107234.pem && ln -fs /usr/share/ca-certificates/107234.pem /etc/ssl/certs/107234.pem"
	I1123 09:08:48.560784  422371 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/107234.pem
	I1123 09:08:48.564768  422371 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 23 08:25 /usr/share/ca-certificates/107234.pem
	I1123 09:08:48.564828  422371 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/107234.pem
	I1123 09:08:48.608294  422371 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/107234.pem /etc/ssl/certs/51391683.0"
	I1123 09:08:48.617212  422371 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1072342.pem && ln -fs /usr/share/ca-certificates/1072342.pem /etc/ssl/certs/1072342.pem"
	I1123 09:08:48.626532  422371 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1072342.pem
	I1123 09:08:48.630942  422371 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 23 08:25 /usr/share/ca-certificates/1072342.pem
	I1123 09:08:48.631085  422371 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1072342.pem
	I1123 09:08:48.666782  422371 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1072342.pem /etc/ssl/certs/3ec20f2e.0"
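The openssl/ln pairs above implement the standard CA lookup scheme: OpenSSL finds trust anchors in /etc/ssl/certs by subject-hash filename. For one cert (the computed hash matches the b5213941.0 link in the log):

    # Hash the CA subject, then symlink it where OpenSSL expects it.
    HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"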
	I1123 09:08:48.677518  422371 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1123 09:08:48.681151  422371 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1123 09:08:48.681214  422371 kubeadm.go:401] StartCluster: {Name:newest-cni-531046 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-531046 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 09:08:48.681302  422371 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1123 09:08:48.681360  422371 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1123 09:08:48.707656  422371 cri.go:89] found id: ""
	I1123 09:08:48.707721  422371 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1123 09:08:48.715885  422371 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1123 09:08:48.724069  422371 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1123 09:08:48.724125  422371 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1123 09:08:48.731960  422371 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1123 09:08:48.731995  422371 kubeadm.go:158] found existing configuration files:
	
	I1123 09:08:48.732033  422371 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1123 09:08:48.740868  422371 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1123 09:08:48.740935  422371 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1123 09:08:48.749362  422371 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1123 09:08:48.757073  422371 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1123 09:08:48.757137  422371 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1123 09:08:48.764375  422371 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1123 09:08:48.772291  422371 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1123 09:08:48.772337  422371 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1123 09:08:48.779794  422371 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1123 09:08:48.788802  422371 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1123 09:08:48.788876  422371 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1123 09:08:48.796307  422371 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
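The bootstrap call above, reflowed for readability; the preflight ignore-list is abbreviated here to a representative subset, the log line carries the full set:

    sudo env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" \
        kubeadm init --config /var/tmp/minikube/kubeadm.yaml \
        --ignore-preflight-errors=SystemVerification,Swap,NumCPU,Mem,Port-10250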
	I1123 09:08:48.834042  422371 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1123 09:08:48.834787  422371 kubeadm.go:319] [preflight] Running pre-flight checks
	I1123 09:08:48.853738  422371 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1123 09:08:48.853845  422371 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1044-gcp
	I1123 09:08:48.853911  422371 kubeadm.go:319] OS: Linux
	I1123 09:08:48.853979  422371 kubeadm.go:319] CGROUPS_CPU: enabled
	I1123 09:08:48.854052  422371 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1123 09:08:48.854114  422371 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1123 09:08:48.854191  422371 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1123 09:08:48.854267  422371 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1123 09:08:48.854347  422371 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1123 09:08:48.854431  422371 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1123 09:08:48.854474  422371 kubeadm.go:319] CGROUPS_IO: enabled
	I1123 09:08:48.912767  422371 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1123 09:08:48.912928  422371 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1123 09:08:48.913088  422371 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
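
As the preflight message itself notes, the image download can be done ahead of time; with the version used in this run:

    # Pre-pull (or just list) the control-plane images for this Kubernetes version
    kubeadm config images pull --kubernetes-version v1.34.1
    kubeadm config images list --kubernetes-version v1.34.1
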
	I1123 09:08:48.923434  422371 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	W1123 09:08:45.461624  416838 pod_ready.go:104] pod "coredns-66bc5c9577-64rdm" is not "Ready", error: <nil>
	W1123 09:08:47.461833  416838 pod_ready.go:104] pod "coredns-66bc5c9577-64rdm" is not "Ready", error: <nil>
	W1123 09:08:48.106628  415250 pod_ready.go:104] pod "coredns-66bc5c9577-k4bmj" is not "Ready", error: <nil>
	W1123 09:08:50.605768  415250 pod_ready.go:104] pod "coredns-66bc5c9577-k4bmj" is not "Ready", error: <nil>
	W1123 09:08:52.608742  415250 pod_ready.go:104] pod "coredns-66bc5c9577-k4bmj" is not "Ready", error: <nil>
	I1123 09:08:48.925558  422371 out.go:252]   - Generating certificates and keys ...
	I1123 09:08:48.925678  422371 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1123 09:08:48.925804  422371 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1123 09:08:49.034803  422371 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1123 09:08:49.289450  422371 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1123 09:08:49.448806  422371 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1123 09:08:49.726714  422371 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1123 09:08:49.777203  422371 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1123 09:08:49.777360  422371 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-531046] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1123 09:08:50.033016  422371 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1123 09:08:50.033189  422371 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-531046] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1123 09:08:50.554580  422371 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1123 09:08:50.798929  422371 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1123 09:08:51.130547  422371 kubeadm.go:319] [certs] Generating "sa" key and public key
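
Each key pair above lands in the certificateDir logged earlier (/var/lib/minikube/certs) and can be inspected with plain openssl; the apiserver.crt file name below is kubeadm's standard layout, assumed rather than shown in this log:

    # Subject and expiry of the freshly minted apiserver certificate
    sudo openssl x509 -in /var/lib/minikube/certs/apiserver.crt -noout -subject -enddate
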
	I1123 09:08:51.130731  422371 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1123 09:08:51.382963  422371 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1123 09:08:51.876494  422371 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1123 09:08:52.259170  422371 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1123 09:08:52.556836  422371 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1123 09:08:52.743709  422371 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1123 09:08:52.744448  422371 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1123 09:08:52.748107  422371 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1123 09:08:52.749366  422371 out.go:252]   - Booting up control plane ...
	I1123 09:08:52.749494  422371 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1123 09:08:52.749594  422371 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1123 09:08:52.750313  422371 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1123 09:08:52.764609  422371 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1123 09:08:52.764769  422371 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1123 09:08:52.771268  422371 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1123 09:08:52.772835  422371 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1123 09:08:52.772922  422371 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1123 09:08:52.899476  422371 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1123 09:08:52.899645  422371 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	W1123 09:08:49.462056  416838 pod_ready.go:104] pod "coredns-66bc5c9577-64rdm" is not "Ready", error: <nil>
	W1123 09:08:51.462324  416838 pod_ready.go:104] pod "coredns-66bc5c9577-64rdm" is not "Ready", error: <nil>
	W1123 09:08:53.463080  416838 pod_ready.go:104] pod "coredns-66bc5c9577-64rdm" is not "Ready", error: <nil>
	I1123 09:08:53.401479  422371 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 502.057505ms
	I1123 09:08:53.404193  422371 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1123 09:08:53.404306  422371 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1123 09:08:53.404432  422371 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1123 09:08:53.404552  422371 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1123 09:08:55.624836  422371 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 2.220425166s
	I1123 09:08:55.653273  422371 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.248953782s
	I1123 09:08:57.405805  422371 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 4.001472935s
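
The kubelet and control-plane probes above hit fixed local endpoints, so the same checks can be reproduced by hand on the node (-k because the serving certificates are not in the host trust store):

    curl -sf  http://127.0.0.1:10248/healthz  && echo "kubelet ok"
    curl -skf https://127.0.0.1:10257/healthz && echo "kube-controller-manager ok"
    curl -skf https://127.0.0.1:10259/livez   && echo "kube-scheduler ok"
    curl -skf https://192.168.76.2:8443/livez && echo "kube-apiserver ok"
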
	I1123 09:08:57.416685  422371 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1123 09:08:57.425837  422371 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1123 09:08:57.437341  422371 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1123 09:08:57.437668  422371 kubeadm.go:319] [mark-control-plane] Marking the node newest-cni-531046 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1123 09:08:57.446573  422371 kubeadm.go:319] [bootstrap-token] Using token: aik216.1v0uh6zpbi73ffjj
	W1123 09:08:55.107893  415250 pod_ready.go:104] pod "coredns-66bc5c9577-k4bmj" is not "Ready", error: <nil>
	W1123 09:08:57.605950  415250 pod_ready.go:104] pod "coredns-66bc5c9577-k4bmj" is not "Ready", error: <nil>
	I1123 09:08:57.448919  422371 out.go:252]   - Configuring RBAC rules ...
	I1123 09:08:57.449076  422371 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1123 09:08:57.452836  422371 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1123 09:08:57.457845  422371 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1123 09:08:57.460583  422371 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1123 09:08:57.463762  422371 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1123 09:08:57.466315  422371 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1123 09:08:57.811852  422371 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1123 09:08:58.229351  422371 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1123 09:08:58.811717  422371 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1123 09:08:58.812572  422371 kubeadm.go:319] 
	I1123 09:08:58.812656  422371 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1123 09:08:58.812689  422371 kubeadm.go:319] 
	I1123 09:08:58.812796  422371 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1123 09:08:58.812807  422371 kubeadm.go:319] 
	I1123 09:08:58.812834  422371 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1123 09:08:58.812909  422371 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1123 09:08:58.813012  422371 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1123 09:08:58.813021  422371 kubeadm.go:319] 
	I1123 09:08:58.813098  422371 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1123 09:08:58.813105  422371 kubeadm.go:319] 
	I1123 09:08:58.813188  422371 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1123 09:08:58.813203  422371 kubeadm.go:319] 
	I1123 09:08:58.813273  422371 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1123 09:08:58.813363  422371 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1123 09:08:58.813450  422371 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1123 09:08:58.813459  422371 kubeadm.go:319] 
	I1123 09:08:58.813536  422371 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1123 09:08:58.813638  422371 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1123 09:08:58.813647  422371 kubeadm.go:319] 
	I1123 09:08:58.813739  422371 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token aik216.1v0uh6zpbi73ffjj \
	I1123 09:08:58.813850  422371 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:25411732a305fa463b7606eb24f85c2336be0d99fc4e5db190f3fbac97d3dca3 \
	I1123 09:08:58.813881  422371 kubeadm.go:319] 	--control-plane 
	I1123 09:08:58.813890  422371 kubeadm.go:319] 
	I1123 09:08:58.814033  422371 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1123 09:08:58.814051  422371 kubeadm.go:319] 
	I1123 09:08:58.814147  422371 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token aik216.1v0uh6zpbi73ffjj \
	I1123 09:08:58.814275  422371 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:25411732a305fa463b7606eb24f85c2336be0d99fc4e5db190f3fbac97d3dca3 
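
Bootstrap tokens such as the one above expire (24h by default), so the printed join line has a limited shelf life. A fresh one can be generated on the control plane, and the --discovery-token-ca-cert-hash is just the SHA-256 of the cluster CA public key; the CA path below follows this cluster's certificateDir:

    sudo kubeadm token create --print-join-command
    # Recompute the discovery hash from the CA certificate
    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex
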
	I1123 09:08:58.817093  422371 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1044-gcp\n", err: exit status 1
	I1123 09:08:58.817189  422371 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1123 09:08:58.817216  422371 cni.go:84] Creating CNI manager for ""
	I1123 09:08:58.817225  422371 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1123 09:08:58.818566  422371 out.go:179] * Configuring CNI (Container Networking Interface) ...
	W1123 09:08:55.962510  416838 pod_ready.go:104] pod "coredns-66bc5c9577-64rdm" is not "Ready", error: <nil>
	W1123 09:08:58.461601  416838 pod_ready.go:104] pod "coredns-66bc5c9577-64rdm" is not "Ready", error: <nil>
	I1123 09:08:58.605634  415250 pod_ready.go:94] pod "coredns-66bc5c9577-k4bmj" is "Ready"
	I1123 09:08:58.605666  415250 pod_ready.go:86] duration metric: took 35.004983235s for pod "coredns-66bc5c9577-k4bmj" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:08:58.608222  415250 pod_ready.go:83] waiting for pod "etcd-embed-certs-529341" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:08:58.612193  415250 pod_ready.go:94] pod "etcd-embed-certs-529341" is "Ready"
	I1123 09:08:58.612218  415250 pod_ready.go:86] duration metric: took 3.972474ms for pod "etcd-embed-certs-529341" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:08:58.613956  415250 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-529341" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:08:58.617282  415250 pod_ready.go:94] pod "kube-apiserver-embed-certs-529341" is "Ready"
	I1123 09:08:58.617297  415250 pod_ready.go:86] duration metric: took 3.313755ms for pod "kube-apiserver-embed-certs-529341" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:08:58.618873  415250 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-529341" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:08:58.803984  415250 pod_ready.go:94] pod "kube-controller-manager-embed-certs-529341" is "Ready"
	I1123 09:08:58.804013  415250 pod_ready.go:86] duration metric: took 185.122916ms for pod "kube-controller-manager-embed-certs-529341" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:08:59.004673  415250 pod_ready.go:83] waiting for pod "kube-proxy-xfwhk" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:08:59.404503  415250 pod_ready.go:94] pod "kube-proxy-xfwhk" is "Ready"
	I1123 09:08:59.404538  415250 pod_ready.go:86] duration metric: took 399.839755ms for pod "kube-proxy-xfwhk" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:08:59.604762  415250 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-529341" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:09:00.004614  415250 pod_ready.go:94] pod "kube-scheduler-embed-certs-529341" is "Ready"
	I1123 09:09:00.004642  415250 pod_ready.go:86] duration metric: took 399.852492ms for pod "kube-scheduler-embed-certs-529341" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:09:00.004654  415250 pod_ready.go:40] duration metric: took 36.407832271s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1123 09:09:00.050225  415250 start.go:625] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1123 09:09:00.052031  415250 out.go:179] * Done! kubectl is now configured to use "embed-certs-529341" cluster and "default" namespace by default
	I1123 09:08:58.819653  422371 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1123 09:08:58.824836  422371 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1123 09:08:58.824853  422371 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1123 09:08:58.838769  422371 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
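
With the manifest applied, the kindnet DaemonSet should roll out and write its CNI config under /etc/cni/net.d. A quick check (the DaemonSet name "kindnet" is inferred from the kindnet-pbp7c pod that appears later in this log):

    sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
      -n kube-system rollout status daemonset kindnet --timeout=120s
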
	I1123 09:08:59.050330  422371 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1123 09:08:59.050398  422371 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 09:08:59.050421  422371 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes newest-cni-531046 minikube.k8s.io/updated_at=2025_11_23T09_08_59_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=50c3a8a3c03e8a84b6c978a884d21c3de8c6d4f1 minikube.k8s.io/name=newest-cni-531046 minikube.k8s.io/primary=true
	I1123 09:08:59.059696  422371 ops.go:34] apiserver oom_adj: -16
	I1123 09:08:59.132695  422371 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 09:08:59.633562  422371 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 09:09:00.132828  422371 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 09:09:00.633440  422371 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 09:09:01.133554  422371 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 09:09:01.633134  422371 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 09:09:02.133762  422371 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 09:09:02.633382  422371 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 09:09:03.133201  422371 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 09:09:03.633554  422371 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1123 09:09:03.703021  422371 kubeadm.go:1114] duration metric: took 4.652688412s to wait for elevateKubeSystemPrivileges
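
The repeated `kubectl get sa default` runs above are a plain poll: retry about every 500ms until the default ServiceAccount exists, since that account is what the minikube-rbac ClusterRoleBinding grants cluster-admin to. As a sketch:

    until sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default \
          --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
      sleep 0.5  # matches the ~500ms cadence visible in the timestamps above
    done
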
	I1123 09:09:03.703060  422371 kubeadm.go:403] duration metric: took 15.021852827s to StartCluster
	I1123 09:09:03.703078  422371 settings.go:142] acquiring lock: {Name:mk7e59eae8b3289f60fef384e6a5716369959bb2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 09:09:03.703139  422371 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21969-103686/kubeconfig
	I1123 09:09:03.705565  422371 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21969-103686/kubeconfig: {Name:mk8cdc20da988b29689f768b3cb01ff7e637077d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 09:09:03.705896  422371 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1123 09:09:03.705914  422371 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1123 09:09:03.705989  422371 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1123 09:09:03.706083  422371 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-531046"
	I1123 09:09:03.706102  422371 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-531046"
	I1123 09:09:03.706117  422371 addons.go:70] Setting default-storageclass=true in profile "newest-cni-531046"
	I1123 09:09:03.706124  422371 config.go:182] Loaded profile config "newest-cni-531046": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 09:09:03.706139  422371 host.go:66] Checking if "newest-cni-531046" exists ...
	I1123 09:09:03.706139  422371 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-531046"
	I1123 09:09:03.706445  422371 cli_runner.go:164] Run: docker container inspect newest-cni-531046 --format={{.State.Status}}
	I1123 09:09:03.706651  422371 cli_runner.go:164] Run: docker container inspect newest-cni-531046 --format={{.State.Status}}
	I1123 09:09:03.707208  422371 out.go:179] * Verifying Kubernetes components...
	I1123 09:09:03.708535  422371 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 09:09:03.730944  422371 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	W1123 09:09:00.461895  416838 pod_ready.go:104] pod "coredns-66bc5c9577-64rdm" is not "Ready", error: <nil>
	W1123 09:09:02.462171  416838 pod_ready.go:104] pod "coredns-66bc5c9577-64rdm" is not "Ready", error: <nil>
	I1123 09:09:03.731281  422371 addons.go:239] Setting addon default-storageclass=true in "newest-cni-531046"
	I1123 09:09:03.731327  422371 host.go:66] Checking if "newest-cni-531046" exists ...
	I1123 09:09:03.731799  422371 cli_runner.go:164] Run: docker container inspect newest-cni-531046 --format={{.State.Status}}
	I1123 09:09:03.732285  422371 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1123 09:09:03.732303  422371 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1123 09:09:03.732355  422371 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-531046
	I1123 09:09:03.763386  422371 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21969-103686/.minikube/machines/newest-cni-531046/id_rsa Username:docker}
	I1123 09:09:03.763424  422371 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1123 09:09:03.763463  422371 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1123 09:09:03.763527  422371 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-531046
	I1123 09:09:03.792870  422371 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33128 SSHKeyPath:/home/jenkins/minikube-integration/21969-103686/.minikube/machines/newest-cni-531046/id_rsa Username:docker}
	I1123 09:09:03.813396  422371 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1123 09:09:03.869552  422371 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 09:09:03.887563  422371 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1123 09:09:03.916880  422371 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1123 09:09:03.995084  422371 start.go:977] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
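
The sed pipeline a few lines up rewrites the CoreDNS ConfigMap in place; the result can be checked, and the injected stanza reconstructed from the sed expressions, as follows:

    # Show the patched Corefile
    sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
      -n kube-system get configmap coredns -o yaml
    # Expected insert (reconstructed from the sed expressions above):
    #   hosts {
    #      192.168.76.1 host.minikube.internal
    #      fallthrough
    #   }
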
	I1123 09:09:03.996629  422371 api_server.go:52] waiting for apiserver process to appear ...
	I1123 09:09:03.996688  422371 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1123 09:09:04.210811  422371 api_server.go:72] duration metric: took 504.857421ms to wait for apiserver process to appear ...
	I1123 09:09:04.210844  422371 api_server.go:88] waiting for apiserver healthz status ...
	I1123 09:09:04.210871  422371 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1123 09:09:04.216782  422371 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1123 09:09:04.217050  422371 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1123 09:09:04.217582  422371 api_server.go:141] control plane version: v1.34.1
	I1123 09:09:04.217607  422371 api_server.go:131] duration metric: took 6.754184ms to wait for apiserver health ...
	I1123 09:09:04.217618  422371 system_pods.go:43] waiting for kube-system pods to appear ...
	I1123 09:09:04.218397  422371 addons.go:530] duration metric: took 512.413943ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1123 09:09:04.219955  422371 system_pods.go:59] 8 kube-system pods found
	I1123 09:09:04.220016  422371 system_pods.go:61] "coredns-66bc5c9577-gk265" [0216f458-438b-4260-8320-f81fb2a01fd4] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1123 09:09:04.220035  422371 system_pods.go:61] "etcd-newest-cni-531046" [1003fb1b-b28b-499c-b1e6-5c8b3d23d4bf] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1123 09:09:04.220048  422371 system_pods.go:61] "kindnet-pbp7c" [72da9944-1b43-4f59-b27a-78a6ebd8f3dc] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1123 09:09:04.220064  422371 system_pods.go:61] "kube-apiserver-newest-cni-531046" [92975545-d846-4326-9cc5-cf12a61f834b] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1123 09:09:04.220077  422371 system_pods.go:61] "kube-controller-manager-newest-cni-531046" [769616d3-3a60-45b1-9246-80ccba447cb5] Running
	I1123 09:09:04.220090  422371 system_pods.go:61] "kube-proxy-4bpzx" [a0812143-d250-4445-85b7-dc7d4dbb23ad] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1123 09:09:04.220102  422371 system_pods.go:61] "kube-scheduler-newest-cni-531046" [f713d5f5-1579-48f4-b2f3-9340bfc94c84] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1123 09:09:04.220112  422371 system_pods.go:61] "storage-provisioner" [d15b527f-4a7d-4cd4-bd83-5f0ec906909f] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1123 09:09:04.220123  422371 system_pods.go:74] duration metric: took 2.497091ms to wait for pod list to return data ...
	I1123 09:09:04.220134  422371 default_sa.go:34] waiting for default service account to be created ...
	I1123 09:09:04.222196  422371 default_sa.go:45] found service account: "default"
	I1123 09:09:04.222218  422371 default_sa.go:55] duration metric: took 2.076925ms for default service account to be created ...
	I1123 09:09:04.222233  422371 kubeadm.go:587] duration metric: took 516.283943ms to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1123 09:09:04.222256  422371 node_conditions.go:102] verifying NodePressure condition ...
	I1123 09:09:04.224622  422371 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1123 09:09:04.224648  422371 node_conditions.go:123] node cpu capacity is 8
	I1123 09:09:04.224667  422371 node_conditions.go:105] duration metric: took 2.405045ms to run NodePressure ...
	I1123 09:09:04.224680  422371 start.go:242] waiting for startup goroutines ...
	I1123 09:09:04.499953  422371 kapi.go:214] "coredns" deployment in "kube-system" namespace and "newest-cni-531046" context rescaled to 1 replicas
	I1123 09:09:04.500010  422371 start.go:247] waiting for cluster config update ...
	I1123 09:09:04.500027  422371 start.go:256] writing updated cluster config ...
	I1123 09:09:04.500363  422371 ssh_runner.go:195] Run: rm -f paused
	I1123 09:09:04.555047  422371 start.go:625] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1123 09:09:04.557667  422371 out.go:179] * Done! kubectl is now configured to use "newest-cni-531046" cluster and "default" namespace by default
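
The "rescaled to 1 replicas" step near the end of the run trims the default two-replica CoreDNS deployment for a single-node cluster; done by hand it would be the equivalent of:

    sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
      -n kube-system scale deployment coredns --replicas=1
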
	
	
	==> CRI-O <==
	Nov 23 09:09:04 newest-cni-531046 crio[774]: time="2025-11-23T09:09:04.08737345Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 09:09:04 newest-cni-531046 crio[774]: time="2025-11-23T09:09:04.088337378Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=d55e4c53-bf96-4c9d-9306-1c4b28767d8a name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 23 09:09:04 newest-cni-531046 crio[774]: time="2025-11-23T09:09:04.090801132Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Nov 23 09:09:04 newest-cni-531046 crio[774]: time="2025-11-23T09:09:04.091739911Z" level=info msg="Ran pod sandbox 3d09158a07cb3e408a2a47f13722dbe9d24f502a044ed498f779b7343528cb14 with infra container: kube-system/kindnet-pbp7c/POD" id=d55e4c53-bf96-4c9d-9306-1c4b28767d8a name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 23 09:09:04 newest-cni-531046 crio[774]: time="2025-11-23T09:09:04.092128181Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=25d6cd6f-99be-4f99-a15d-95bc45a68fa8 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 23 09:09:04 newest-cni-531046 crio[774]: time="2025-11-23T09:09:04.093090987Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=a0215e6e-49f5-4dda-adcc-24f6df1f365b name=/runtime.v1.ImageService/ImageStatus
	Nov 23 09:09:04 newest-cni-531046 crio[774]: time="2025-11-23T09:09:04.094299694Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=aff6b92d-23d5-4477-911f-1f2dacdfe399 name=/runtime.v1.ImageService/ImageStatus
	Nov 23 09:09:04 newest-cni-531046 crio[774]: time="2025-11-23T09:09:04.095243341Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Nov 23 09:09:04 newest-cni-531046 crio[774]: time="2025-11-23T09:09:04.096126Z" level=info msg="Ran pod sandbox 4350840296c3740e1f792bc4ac2c9a6267b2efa23271966fc67097439f9f7fca with infra container: kube-system/kube-proxy-4bpzx/POD" id=25d6cd6f-99be-4f99-a15d-95bc45a68fa8 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 23 09:09:04 newest-cni-531046 crio[774]: time="2025-11-23T09:09:04.097086774Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=9e98c8ac-a89d-46d6-acd3-e9909ff2f7af name=/runtime.v1.ImageService/ImageStatus
	Nov 23 09:09:04 newest-cni-531046 crio[774]: time="2025-11-23T09:09:04.098176621Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=38811ec6-4151-4882-aba6-00116c8b0e26 name=/runtime.v1.ImageService/ImageStatus
	Nov 23 09:09:04 newest-cni-531046 crio[774]: time="2025-11-23T09:09:04.10071495Z" level=info msg="Creating container: kube-system/kindnet-pbp7c/kindnet-cni" id=8250b950-be25-4b17-9b8e-8be5f627cbc3 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 09:09:04 newest-cni-531046 crio[774]: time="2025-11-23T09:09:04.100817628Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 09:09:04 newest-cni-531046 crio[774]: time="2025-11-23T09:09:04.10208778Z" level=info msg="Creating container: kube-system/kube-proxy-4bpzx/kube-proxy" id=e205a2dd-fd86-45c7-b5a4-cda6accd6cff name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 09:09:04 newest-cni-531046 crio[774]: time="2025-11-23T09:09:04.10223139Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 09:09:04 newest-cni-531046 crio[774]: time="2025-11-23T09:09:04.105450473Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 09:09:04 newest-cni-531046 crio[774]: time="2025-11-23T09:09:04.105878348Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 09:09:04 newest-cni-531046 crio[774]: time="2025-11-23T09:09:04.107551423Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 09:09:04 newest-cni-531046 crio[774]: time="2025-11-23T09:09:04.108138084Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 09:09:04 newest-cni-531046 crio[774]: time="2025-11-23T09:09:04.134954905Z" level=info msg="Created container de5ba290d125593aa9416a2dcf532cdb367c07da34f89012c9adfc5ccb48730a: kube-system/kindnet-pbp7c/kindnet-cni" id=8250b950-be25-4b17-9b8e-8be5f627cbc3 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 09:09:04 newest-cni-531046 crio[774]: time="2025-11-23T09:09:04.135715521Z" level=info msg="Starting container: de5ba290d125593aa9416a2dcf532cdb367c07da34f89012c9adfc5ccb48730a" id=6ad9b2f0-865d-4d4e-b53f-8a31709ba1e9 name=/runtime.v1.RuntimeService/StartContainer
	Nov 23 09:09:04 newest-cni-531046 crio[774]: time="2025-11-23T09:09:04.137518271Z" level=info msg="Started container" PID=1606 containerID=de5ba290d125593aa9416a2dcf532cdb367c07da34f89012c9adfc5ccb48730a description=kube-system/kindnet-pbp7c/kindnet-cni id=6ad9b2f0-865d-4d4e-b53f-8a31709ba1e9 name=/runtime.v1.RuntimeService/StartContainer sandboxID=3d09158a07cb3e408a2a47f13722dbe9d24f502a044ed498f779b7343528cb14
	Nov 23 09:09:04 newest-cni-531046 crio[774]: time="2025-11-23T09:09:04.139082856Z" level=info msg="Created container 4705a523c10a2621ba8937094f8eca029fd018e69f60f5b7e119f167713ea354: kube-system/kube-proxy-4bpzx/kube-proxy" id=e205a2dd-fd86-45c7-b5a4-cda6accd6cff name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 09:09:04 newest-cni-531046 crio[774]: time="2025-11-23T09:09:04.139671653Z" level=info msg="Starting container: 4705a523c10a2621ba8937094f8eca029fd018e69f60f5b7e119f167713ea354" id=bbb79e3e-0caa-43e0-83e7-3ecad90f9f5a name=/runtime.v1.RuntimeService/StartContainer
	Nov 23 09:09:04 newest-cni-531046 crio[774]: time="2025-11-23T09:09:04.143167159Z" level=info msg="Started container" PID=1607 containerID=4705a523c10a2621ba8937094f8eca029fd018e69f60f5b7e119f167713ea354 description=kube-system/kube-proxy-4bpzx/kube-proxy id=bbb79e3e-0caa-43e0-83e7-3ecad90f9f5a name=/runtime.v1.RuntimeService/StartContainer sandboxID=4350840296c3740e1f792bc4ac2c9a6267b2efa23271966fc67097439f9f7fca
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	4705a523c10a2       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7   1 second ago        Running             kube-proxy                0                   4350840296c37       kube-proxy-4bpzx                            kube-system
	de5ba290d1255       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   1 second ago        Running             kindnet-cni               0                   3d09158a07cb3       kindnet-pbp7c                               kube-system
	26c90cf3a0c19       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97   11 seconds ago      Running             kube-apiserver            0                   aa2990438945e       kube-apiserver-newest-cni-531046            kube-system
	fb24a39d8ebdd       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   11 seconds ago      Running             etcd                      0                   5ad638deb6d49       etcd-newest-cni-531046                      kube-system
	5fb9420e5b5dd       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813   11 seconds ago      Running             kube-scheduler            0                   be56ce8806f86       kube-scheduler-newest-cni-531046            kube-system
	7ea523eee5a6a       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f   11 seconds ago      Running             kube-controller-manager   0                   ba1c6174086de       kube-controller-manager-newest-cni-531046   kube-system
	
	
	==> describe nodes <==
	Name:               newest-cni-531046
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=newest-cni-531046
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=50c3a8a3c03e8a84b6c978a884d21c3de8c6d4f1
	                    minikube.k8s.io/name=newest-cni-531046
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_23T09_08_59_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 23 Nov 2025 09:08:55 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-531046
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 23 Nov 2025 09:08:58 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 23 Nov 2025 09:08:58 +0000   Sun, 23 Nov 2025 09:08:54 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 23 Nov 2025 09:08:58 +0000   Sun, 23 Nov 2025 09:08:54 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 23 Nov 2025 09:08:58 +0000   Sun, 23 Nov 2025 09:08:54 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Sun, 23 Nov 2025 09:08:58 +0000   Sun, 23 Nov 2025 09:08:54 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    newest-cni-531046
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 9629f1d5bc1ed524a56ce23c69214c09
	  System UUID:                269c937c-ad30-473c-998a-d61087f9e09b
	  Boot ID:                    49386d37-de97-442f-8364-9b77578bcf47
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-531046                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         7s
	  kube-system                 kindnet-pbp7c                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      2s
	  kube-system                 kube-apiserver-newest-cni-531046             250m (3%)     0 (0%)      0 (0%)           0 (0%)         7s
	  kube-system                 kube-controller-manager-newest-cni-531046    200m (2%)     0 (0%)      0 (0%)           0 (0%)         7s
	  kube-system                 kube-proxy-4bpzx                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  kube-system                 kube-scheduler-newest-cni-531046             100m (1%)     0 (0%)      0 (0%)           0 (0%)         7s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 1s                 kube-proxy       
	  Normal  Starting                 12s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  12s (x8 over 12s)  kubelet          Node newest-cni-531046 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12s (x8 over 12s)  kubelet          Node newest-cni-531046 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12s (x8 over 12s)  kubelet          Node newest-cni-531046 status is now: NodeHasSufficientPID
	  Normal  Starting                 7s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  7s                 kubelet          Node newest-cni-531046 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7s                 kubelet          Node newest-cni-531046 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7s                 kubelet          Node newest-cni-531046 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           3s                 node-controller  Node newest-cni-531046 event: Registered Node newest-cni-531046 in Controller
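
The Ready=False condition and the node.kubernetes.io/not-ready:NoSchedule taint above both clear once kindnet drops a CNI config into /etc/cni/net.d; progress can be watched with standard kubectl:

    kubectl get node newest-cni-531046 \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
    kubectl describe node newest-cni-531046 | grep -i taints
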
	
	
	==> dmesg <==
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff e2 f6 b3 df 65 66 08 06
	[ +15.220231] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff ce d6 cd 1c d5 af 08 06
	[  +0.016823] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff 42 21 9e 70 54 d2 08 06
	[  +0.853950] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 8a f3 da 67 50 34 08 06
	[  +0.000402] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff e2 f6 b3 df 65 66 08 06
	[Nov23 09:06] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 3a fe f0 bb b2 e5 08 06
	[  +0.000433] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000017] ll header: 00000000: ff ff ff ff ff ff 42 21 9e 70 54 d2 08 06
	[ +22.099976] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff fa bc 42 68 ca 38 08 06
	[  +0.042361] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 02 6f 93 2c ed 12 08 06
	[ +12.988668] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff c6 40 c7 0d 08 88 08 06
	[  +0.000458] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff c6 f2 c5 3b d5 0a 08 06
	[  +8.074904] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff ba d8 15 23 cb ea 08 06
	[  +0.000480] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff fa bc 42 68 ca 38 08 06
	
	
	==> etcd [fb24a39d8ebdd0b609d8411190c471fe0a89cb24d19969585d80ee17af9cf85f] <==
	{"level":"warn","ts":"2025-11-23T09:08:54.859073Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43204","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:08:54.866894Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43220","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:08:54.877686Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43236","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:08:54.885583Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43260","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:08:54.893373Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43266","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:08:54.902337Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43286","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:08:54.909836Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43292","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:08:54.919340Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43316","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:08:54.930156Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43340","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:08:54.938821Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43350","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:08:54.948856Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43354","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:08:54.958573Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43372","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:08:54.966958Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43390","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:08:54.974587Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43398","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:08:54.983078Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43424","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:08:54.991154Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43440","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:08:54.997921Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43454","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:08:55.004717Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43460","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:08:55.011808Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43470","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:08:55.020114Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43478","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:08:55.027784Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43490","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:08:55.034220Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43504","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:08:55.056734Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43520","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:08:55.065559Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43546","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:08:55.072607Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43564","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 09:09:05 up  1:51,  0 user,  load average: 5.55, 4.60, 2.93
	Linux newest-cni-531046 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [de5ba290d125593aa9416a2dcf532cdb367c07da34f89012c9adfc5ccb48730a] <==
	I1123 09:09:04.332307       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1123 09:09:04.332588       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1123 09:09:04.332738       1 main.go:148] setting mtu 1500 for CNI 
	I1123 09:09:04.332751       1 main.go:178] kindnetd IP family: "ipv4"
	I1123 09:09:04.332774       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-23T09:09:04Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	E1123 09:09:04.536327       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	I1123 09:09:04.536361       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1123 09:09:04.536367       1 controller.go:381] "Waiting for informer caches to sync"
	I1123 09:09:04.536374       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1123 09:09:04.536447       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1123 09:09:04.536744       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1123 09:09:04.629657       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	
	
	==> kube-apiserver [26c90cf3a0c19f0eedcebd28baf2247b07f9a4414c94a48893aa1407c71f2946] <==
	I1123 09:08:55.683035       1 controller.go:667] quota admission added evaluator for: namespaces
	I1123 09:08:55.684418       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1123 09:08:55.684470       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1123 09:08:55.687335       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1123 09:08:55.687417       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1123 09:08:55.694411       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1123 09:08:55.694754       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1123 09:08:55.878236       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1123 09:08:56.585195       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1123 09:08:56.588724       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1123 09:08:56.588744       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1123 09:08:57.050338       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1123 09:08:57.088621       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1123 09:08:57.189551       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1123 09:08:57.195652       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1123 09:08:57.196757       1 controller.go:667] quota admission added evaluator for: endpoints
	I1123 09:08:57.201314       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1123 09:08:57.641742       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1123 09:08:58.218143       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1123 09:08:58.228425       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1123 09:08:58.235498       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1123 09:09:02.646179       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1123 09:09:02.650066       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1123 09:09:03.645007       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1123 09:09:03.745582       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [7ea523eee5a6a1d3f389681510bc445ed0a43af20c2aa5474e47284e3eea0c35] <==
	I1123 09:09:02.641058       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1123 09:09:02.641080       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1123 09:09:02.641060       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1123 09:09:02.641136       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1123 09:09:02.641149       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1123 09:09:02.641161       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1123 09:09:02.641181       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1123 09:09:02.641396       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1123 09:09:02.642476       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1123 09:09:02.642493       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1123 09:09:02.642512       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1123 09:09:02.642527       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1123 09:09:02.642600       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1123 09:09:02.642714       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1123 09:09:02.643359       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1123 09:09:02.643365       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1123 09:09:02.644733       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1123 09:09:02.645856       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1123 09:09:02.647251       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1123 09:09:02.651053       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1123 09:09:02.657464       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1123 09:09:02.657576       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1123 09:09:02.657710       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="newest-cni-531046"
	I1123 09:09:02.657763       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1123 09:09:02.660947       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [4705a523c10a2621ba8937094f8eca029fd018e69f60f5b7e119f167713ea354] <==
	I1123 09:09:04.185690       1 server_linux.go:53] "Using iptables proxy"
	I1123 09:09:04.250590       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1123 09:09:04.351384       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1123 09:09:04.351462       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1123 09:09:04.351535       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1123 09:09:04.369593       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1123 09:09:04.369648       1 server_linux.go:132] "Using iptables Proxier"
	I1123 09:09:04.374724       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1123 09:09:04.375192       1 server.go:527] "Version info" version="v1.34.1"
	I1123 09:09:04.375231       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1123 09:09:04.376499       1 config.go:200] "Starting service config controller"
	I1123 09:09:04.376520       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1123 09:09:04.376546       1 config.go:106] "Starting endpoint slice config controller"
	I1123 09:09:04.376552       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1123 09:09:04.376579       1 config.go:403] "Starting serviceCIDR config controller"
	I1123 09:09:04.376591       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1123 09:09:04.376655       1 config.go:309] "Starting node config controller"
	I1123 09:09:04.376678       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1123 09:09:04.376685       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1123 09:09:04.476732       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1123 09:09:04.476734       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1123 09:09:04.476754       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [5fb9420e5b5dd8b0eb2d15d8726893d83d4bc7e029e4008dd9457a860914a21f] <==
	E1123 09:08:55.651960       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1123 09:08:55.652064       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1123 09:08:55.652058       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1123 09:08:55.652147       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1123 09:08:55.652166       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1123 09:08:55.652269       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1123 09:08:55.652277       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1123 09:08:55.652357       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1123 09:08:55.652333       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1123 09:08:55.652619       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1123 09:08:55.652744       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1123 09:08:56.497149       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1123 09:08:56.516486       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1123 09:08:56.550885       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1123 09:08:56.601955       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1123 09:08:56.635994       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1123 09:08:56.649371       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1123 09:08:56.653380       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1123 09:08:56.680676       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1123 09:08:56.725220       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1123 09:08:56.729377       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1123 09:08:56.741715       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1123 09:08:56.798421       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1123 09:08:56.813714       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	I1123 09:08:58.649357       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 23 09:08:58 newest-cni-531046 kubelet[1314]: I1123 09:08:58.344296    1314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e68cbb806b653c554596242fe8008c08-etc-ca-certificates\") pod \"kube-controller-manager-newest-cni-531046\" (UID: \"e68cbb806b653c554596242fe8008c08\") " pod="kube-system/kube-controller-manager-newest-cni-531046"
	Nov 23 09:08:58 newest-cni-531046 kubelet[1314]: I1123 09:08:58.344327    1314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e68cbb806b653c554596242fe8008c08-usr-share-ca-certificates\") pod \"kube-controller-manager-newest-cni-531046\" (UID: \"e68cbb806b653c554596242fe8008c08\") " pod="kube-system/kube-controller-manager-newest-cni-531046"
	Nov 23 09:08:58 newest-cni-531046 kubelet[1314]: I1123 09:08:58.344346    1314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-data\" (UniqueName: \"kubernetes.io/host-path/24c9a0339642957de441eb7396c26208-etcd-data\") pod \"etcd-newest-cni-531046\" (UID: \"24c9a0339642957de441eb7396c26208\") " pod="kube-system/etcd-newest-cni-531046"
	Nov 23 09:08:58 newest-cni-531046 kubelet[1314]: I1123 09:08:58.344366    1314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b583f4cba29358ce3556d6c4e772f2da-ca-certs\") pod \"kube-apiserver-newest-cni-531046\" (UID: \"b583f4cba29358ce3556d6c4e772f2da\") " pod="kube-system/kube-apiserver-newest-cni-531046"
	Nov 23 09:08:59 newest-cni-531046 kubelet[1314]: I1123 09:08:59.034942    1314 apiserver.go:52] "Watching apiserver"
	Nov 23 09:08:59 newest-cni-531046 kubelet[1314]: I1123 09:08:59.043056    1314 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Nov 23 09:08:59 newest-cni-531046 kubelet[1314]: I1123 09:08:59.073545    1314 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-newest-cni-531046"
	Nov 23 09:08:59 newest-cni-531046 kubelet[1314]: I1123 09:08:59.073661    1314 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-newest-cni-531046"
	Nov 23 09:08:59 newest-cni-531046 kubelet[1314]: E1123 09:08:59.082086    1314 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-531046\" already exists" pod="kube-system/etcd-newest-cni-531046"
	Nov 23 09:08:59 newest-cni-531046 kubelet[1314]: E1123 09:08:59.085351    1314 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-531046\" already exists" pod="kube-system/kube-apiserver-newest-cni-531046"
	Nov 23 09:08:59 newest-cni-531046 kubelet[1314]: I1123 09:08:59.122417    1314 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-newest-cni-531046" podStartSLOduration=1.122371127 podStartE2EDuration="1.122371127s" podCreationTimestamp="2025-11-23 09:08:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 09:08:59.122340762 +0000 UTC m=+1.156023790" watchObservedRunningTime="2025-11-23 09:08:59.122371127 +0000 UTC m=+1.156054148"
	Nov 23 09:08:59 newest-cni-531046 kubelet[1314]: I1123 09:08:59.130372    1314 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-newest-cni-531046" podStartSLOduration=1.1303504420000001 podStartE2EDuration="1.130350442s" podCreationTimestamp="2025-11-23 09:08:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 09:08:59.130291298 +0000 UTC m=+1.163974326" watchObservedRunningTime="2025-11-23 09:08:59.130350442 +0000 UTC m=+1.164033469"
	Nov 23 09:08:59 newest-cni-531046 kubelet[1314]: I1123 09:08:59.148860    1314 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-newest-cni-531046" podStartSLOduration=1.148836312 podStartE2EDuration="1.148836312s" podCreationTimestamp="2025-11-23 09:08:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 09:08:59.13991206 +0000 UTC m=+1.173595091" watchObservedRunningTime="2025-11-23 09:08:59.148836312 +0000 UTC m=+1.182519335"
	Nov 23 09:08:59 newest-cni-531046 kubelet[1314]: I1123 09:08:59.149012    1314 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-newest-cni-531046" podStartSLOduration=1.149003016 podStartE2EDuration="1.149003016s" podCreationTimestamp="2025-11-23 09:08:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 09:08:59.14899123 +0000 UTC m=+1.182674257" watchObservedRunningTime="2025-11-23 09:08:59.149003016 +0000 UTC m=+1.182686046"
	Nov 23 09:09:02 newest-cni-531046 kubelet[1314]: I1123 09:09:02.675672    1314 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Nov 23 09:09:02 newest-cni-531046 kubelet[1314]: I1123 09:09:02.676355    1314 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Nov 23 09:09:03 newest-cni-531046 kubelet[1314]: I1123 09:09:03.883878    1314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/72da9944-1b43-4f59-b27a-78a6ebd8f3dc-xtables-lock\") pod \"kindnet-pbp7c\" (UID: \"72da9944-1b43-4f59-b27a-78a6ebd8f3dc\") " pod="kube-system/kindnet-pbp7c"
	Nov 23 09:09:03 newest-cni-531046 kubelet[1314]: I1123 09:09:03.883945    1314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a0812143-d250-4445-85b7-dc7d4dbb23ad-lib-modules\") pod \"kube-proxy-4bpzx\" (UID: \"a0812143-d250-4445-85b7-dc7d4dbb23ad\") " pod="kube-system/kube-proxy-4bpzx"
	Nov 23 09:09:03 newest-cni-531046 kubelet[1314]: I1123 09:09:03.884027    1314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/72da9944-1b43-4f59-b27a-78a6ebd8f3dc-lib-modules\") pod \"kindnet-pbp7c\" (UID: \"72da9944-1b43-4f59-b27a-78a6ebd8f3dc\") " pod="kube-system/kindnet-pbp7c"
	Nov 23 09:09:03 newest-cni-531046 kubelet[1314]: I1123 09:09:03.884058    1314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/a0812143-d250-4445-85b7-dc7d4dbb23ad-kube-proxy\") pod \"kube-proxy-4bpzx\" (UID: \"a0812143-d250-4445-85b7-dc7d4dbb23ad\") " pod="kube-system/kube-proxy-4bpzx"
	Nov 23 09:09:03 newest-cni-531046 kubelet[1314]: I1123 09:09:03.884125    1314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rnbhk\" (UniqueName: \"kubernetes.io/projected/a0812143-d250-4445-85b7-dc7d4dbb23ad-kube-api-access-rnbhk\") pod \"kube-proxy-4bpzx\" (UID: \"a0812143-d250-4445-85b7-dc7d4dbb23ad\") " pod="kube-system/kube-proxy-4bpzx"
	Nov 23 09:09:03 newest-cni-531046 kubelet[1314]: I1123 09:09:03.884264    1314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/72da9944-1b43-4f59-b27a-78a6ebd8f3dc-cni-cfg\") pod \"kindnet-pbp7c\" (UID: \"72da9944-1b43-4f59-b27a-78a6ebd8f3dc\") " pod="kube-system/kindnet-pbp7c"
	Nov 23 09:09:03 newest-cni-531046 kubelet[1314]: I1123 09:09:03.884304    1314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-958gj\" (UniqueName: \"kubernetes.io/projected/72da9944-1b43-4f59-b27a-78a6ebd8f3dc-kube-api-access-958gj\") pod \"kindnet-pbp7c\" (UID: \"72da9944-1b43-4f59-b27a-78a6ebd8f3dc\") " pod="kube-system/kindnet-pbp7c"
	Nov 23 09:09:03 newest-cni-531046 kubelet[1314]: I1123 09:09:03.884333    1314 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a0812143-d250-4445-85b7-dc7d4dbb23ad-xtables-lock\") pod \"kube-proxy-4bpzx\" (UID: \"a0812143-d250-4445-85b7-dc7d4dbb23ad\") " pod="kube-system/kube-proxy-4bpzx"
	Nov 23 09:09:05 newest-cni-531046 kubelet[1314]: I1123 09:09:05.101148    1314 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-4bpzx" podStartSLOduration=2.101124356 podStartE2EDuration="2.101124356s" podCreationTimestamp="2025-11-23 09:09:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-23 09:09:05.100957475 +0000 UTC m=+7.134640504" watchObservedRunningTime="2025-11-23 09:09:05.101124356 +0000 UTC m=+7.134807385"
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-531046 -n newest-cni-531046
helpers_test.go:269: (dbg) Run:  kubectl --context newest-cni-531046 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: coredns-66bc5c9577-gk265 storage-provisioner
helpers_test.go:282: ======> post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context newest-cni-531046 describe pod coredns-66bc5c9577-gk265 storage-provisioner
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context newest-cni-531046 describe pod coredns-66bc5c9577-gk265 storage-provisioner: exit status 1 (59.566732ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-66bc5c9577-gk265" not found
	Error from server (NotFound): pods "storage-provisioner" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context newest-cni-531046 describe pod coredns-66bc5c9577-gk265 storage-provisioner: exit status 1
--- FAIL: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (2.03s)
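For local triage, the harness's non-running-pod check can be reproduced by hand. A minimal sketch using the same kubectl flags shown at helpers_test.go:269 and :285 above (context and pod names are taken from this run and will differ elsewhere):

	# List pods in any namespace whose phase is not Running (same query the harness runs)
	kubectl --context newest-cni-531046 get po -A --field-selector=status.phase!=Running -o=jsonpath='{.items[*].metadata.name}'
	# Describing them can race pod replacement: NotFound, as in the stderr above, usually means the pod was recreated under a new name
	kubectl --context newest-cni-531046 describe pod coredns-66bc5c9577-gk265 storage-provisioner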

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (5.86s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-529341 --alsologtostderr -v=1
E1123 09:09:12.393353  107234 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/kindnet-741183/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p embed-certs-529341 --alsologtostderr -v=1: exit status 80 (1.775093935s)

                                                
                                                
-- stdout --
	* Pausing node embed-certs-529341 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1123 09:09:11.794317  429349 out.go:360] Setting OutFile to fd 1 ...
	I1123 09:09:11.794592  429349 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 09:09:11.794604  429349 out.go:374] Setting ErrFile to fd 2...
	I1123 09:09:11.794609  429349 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 09:09:11.794812  429349 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21969-103686/.minikube/bin
	I1123 09:09:11.795094  429349 out.go:368] Setting JSON to false
	I1123 09:09:11.795117  429349 mustload.go:66] Loading cluster: embed-certs-529341
	I1123 09:09:11.795498  429349 config.go:182] Loaded profile config "embed-certs-529341": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 09:09:11.795927  429349 cli_runner.go:164] Run: docker container inspect embed-certs-529341 --format={{.State.Status}}
	I1123 09:09:11.813959  429349 host.go:66] Checking if "embed-certs-529341" exists ...
	I1123 09:09:11.814210  429349 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 09:09:11.871876  429349 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:77 SystemTime:2025-11-23 09:09:11.862365047 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1123 09:09:11.872570  429349 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1763503576-21924/minikube-v1.37.0-1763503576-21924-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1763503576-21924-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:embed-certs-529341 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1123 09:09:11.874447  429349 out.go:179] * Pausing node embed-certs-529341 ... 
	I1123 09:09:11.875476  429349 host.go:66] Checking if "embed-certs-529341" exists ...
	I1123 09:09:11.875724  429349 ssh_runner.go:195] Run: systemctl --version
	I1123 09:09:11.875769  429349 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-529341
	I1123 09:09:11.895157  429349 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33118 SSHKeyPath:/home/jenkins/minikube-integration/21969-103686/.minikube/machines/embed-certs-529341/id_rsa Username:docker}
	I1123 09:09:11.994753  429349 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 09:09:12.006717  429349 pause.go:52] kubelet running: true
	I1123 09:09:12.006786  429349 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1123 09:09:12.169688  429349 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1123 09:09:12.169777  429349 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1123 09:09:12.234296  429349 cri.go:89] found id: "55ae874f44dc2af85018340071210dea3f20e2e8d9f97c97c756c4502243dc3e"
	I1123 09:09:12.234320  429349 cri.go:89] found id: "4c812cd4da92b4f8e5c65f43f8329812f3a2909eda6cf6aecc65a2735df04ae1"
	I1123 09:09:12.234326  429349 cri.go:89] found id: "f2040fd9793ad37371368b566e46ecdaef1bdd733df18fb23cc2af7669381df4"
	I1123 09:09:12.234331  429349 cri.go:89] found id: "b39a3a89a3260a4d13829f17d219e1eff98e35fb95ac427b3eeff00f500fc9cf"
	I1123 09:09:12.234336  429349 cri.go:89] found id: "24605aef520e058c1174bc9967a0b76ad5e754e93c2fe3760330c218fd7991da"
	I1123 09:09:12.234346  429349 cri.go:89] found id: "73227818d4fc9086a936e1b1251ac49dc9f565e9664d34c892e0e5e5c62a8920"
	I1123 09:09:12.234351  429349 cri.go:89] found id: "e146e17fa358a72d868c4916214f772a64934dfcef476610c2ec35b50a15e5a8"
	I1123 09:09:12.234355  429349 cri.go:89] found id: "9203249d1159b35eb2d2457002eb5a7611462190dc85089a0e28c7fd11b1257a"
	I1123 09:09:12.234359  429349 cri.go:89] found id: "51c0b9d62ee3b397d97f51cf65c1c8166419f7ce47ad5cd1f86257c9ff8d2429"
	I1123 09:09:12.234379  429349 cri.go:89] found id: "68b41eb209db97fed6d1c6b8bca8594140644f8fd87b6b346514d4238db1ac52"
	I1123 09:09:12.234388  429349 cri.go:89] found id: "1fcc5add6e61a5d923fcf319bbde8c2bbb3114452f9be5a89a324af683e58bd4"
	I1123 09:09:12.234392  429349 cri.go:89] found id: ""
	I1123 09:09:12.234436  429349 ssh_runner.go:195] Run: sudo runc list -f json
	I1123 09:09:12.246341  429349 retry.go:31] will retry after 306.908505ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T09:09:12Z" level=error msg="open /run/runc: no such file or directory"
	I1123 09:09:12.553939  429349 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 09:09:12.569471  429349 pause.go:52] kubelet running: false
	I1123 09:09:12.569547  429349 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1123 09:09:12.736225  429349 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1123 09:09:12.736308  429349 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1123 09:09:12.807585  429349 cri.go:89] found id: "55ae874f44dc2af85018340071210dea3f20e2e8d9f97c97c756c4502243dc3e"
	I1123 09:09:12.807607  429349 cri.go:89] found id: "4c812cd4da92b4f8e5c65f43f8329812f3a2909eda6cf6aecc65a2735df04ae1"
	I1123 09:09:12.807611  429349 cri.go:89] found id: "f2040fd9793ad37371368b566e46ecdaef1bdd733df18fb23cc2af7669381df4"
	I1123 09:09:12.807614  429349 cri.go:89] found id: "b39a3a89a3260a4d13829f17d219e1eff98e35fb95ac427b3eeff00f500fc9cf"
	I1123 09:09:12.807618  429349 cri.go:89] found id: "24605aef520e058c1174bc9967a0b76ad5e754e93c2fe3760330c218fd7991da"
	I1123 09:09:12.807621  429349 cri.go:89] found id: "73227818d4fc9086a936e1b1251ac49dc9f565e9664d34c892e0e5e5c62a8920"
	I1123 09:09:12.807624  429349 cri.go:89] found id: "e146e17fa358a72d868c4916214f772a64934dfcef476610c2ec35b50a15e5a8"
	I1123 09:09:12.807627  429349 cri.go:89] found id: "9203249d1159b35eb2d2457002eb5a7611462190dc85089a0e28c7fd11b1257a"
	I1123 09:09:12.807629  429349 cri.go:89] found id: "51c0b9d62ee3b397d97f51cf65c1c8166419f7ce47ad5cd1f86257c9ff8d2429"
	I1123 09:09:12.807648  429349 cri.go:89] found id: "68b41eb209db97fed6d1c6b8bca8594140644f8fd87b6b346514d4238db1ac52"
	I1123 09:09:12.807654  429349 cri.go:89] found id: "1fcc5add6e61a5d923fcf319bbde8c2bbb3114452f9be5a89a324af683e58bd4"
	I1123 09:09:12.807658  429349 cri.go:89] found id: ""
	I1123 09:09:12.807695  429349 ssh_runner.go:195] Run: sudo runc list -f json
	I1123 09:09:12.818873  429349 retry.go:31] will retry after 403.256012ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T09:09:12Z" level=error msg="open /run/runc: no such file or directory"
	I1123 09:09:13.222450  429349 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 09:09:13.235638  429349 pause.go:52] kubelet running: false
	I1123 09:09:13.235704  429349 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1123 09:09:13.414460  429349 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1123 09:09:13.414543  429349 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1123 09:09:13.483091  429349 cri.go:89] found id: "55ae874f44dc2af85018340071210dea3f20e2e8d9f97c97c756c4502243dc3e"
	I1123 09:09:13.483116  429349 cri.go:89] found id: "4c812cd4da92b4f8e5c65f43f8329812f3a2909eda6cf6aecc65a2735df04ae1"
	I1123 09:09:13.483123  429349 cri.go:89] found id: "f2040fd9793ad37371368b566e46ecdaef1bdd733df18fb23cc2af7669381df4"
	I1123 09:09:13.483128  429349 cri.go:89] found id: "b39a3a89a3260a4d13829f17d219e1eff98e35fb95ac427b3eeff00f500fc9cf"
	I1123 09:09:13.483132  429349 cri.go:89] found id: "24605aef520e058c1174bc9967a0b76ad5e754e93c2fe3760330c218fd7991da"
	I1123 09:09:13.483138  429349 cri.go:89] found id: "73227818d4fc9086a936e1b1251ac49dc9f565e9664d34c892e0e5e5c62a8920"
	I1123 09:09:13.483205  429349 cri.go:89] found id: "e146e17fa358a72d868c4916214f772a64934dfcef476610c2ec35b50a15e5a8"
	I1123 09:09:13.483253  429349 cri.go:89] found id: "9203249d1159b35eb2d2457002eb5a7611462190dc85089a0e28c7fd11b1257a"
	I1123 09:09:13.483262  429349 cri.go:89] found id: "51c0b9d62ee3b397d97f51cf65c1c8166419f7ce47ad5cd1f86257c9ff8d2429"
	I1123 09:09:13.483270  429349 cri.go:89] found id: "68b41eb209db97fed6d1c6b8bca8594140644f8fd87b6b346514d4238db1ac52"
	I1123 09:09:13.483276  429349 cri.go:89] found id: "1fcc5add6e61a5d923fcf319bbde8c2bbb3114452f9be5a89a324af683e58bd4"
	I1123 09:09:13.483280  429349 cri.go:89] found id: ""
	I1123 09:09:13.483331  429349 ssh_runner.go:195] Run: sudo runc list -f json
	I1123 09:09:13.497929  429349 out.go:203] 
	W1123 09:09:13.499247  429349 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T09:09:13Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T09:09:13Z" level=error msg="open /run/runc: no such file or directory"
	
	W1123 09:09:13.499266  429349 out.go:285] * 
	* 
	W1123 09:09:13.505793  429349 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1123 09:09:13.506964  429349 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p embed-certs-529341 --alsologtostderr -v=1 failed: exit status 80
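The exit status 80 (GUEST_PAUSE) above comes from the container-listing probe, not from stopping kubelet: the stderr shows kubelet going from running: true to running: false, after which every `sudo runc list -f json` retry fails with `open /run/runc: no such file or directory`. A manual check along these lines may help confirm the missing state directory on the node (a sketch; the crictl fallback is an assumption here, not what minikube itself runs):

	# Does runc's default state directory exist inside the node?
	out/minikube-linux-amd64 -p embed-certs-529341 ssh -- "sudo ls /run/runc"
	# CRI-O's own container listing, independent of the runc state dir
	out/minikube-linux-amd64 -p embed-certs-529341 ssh -- "sudo crictl ps --state Running -q"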
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-529341
helpers_test.go:243: (dbg) docker inspect embed-certs-529341:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "cd25ec65ad7d7225aaf7739d7d3946f2d586e4220fd3d8a39c26bd159fb65cbc",
	        "Created": "2025-11-23T09:07:06.148431191Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 415451,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-23T09:08:12.954998675Z",
	            "FinishedAt": "2025-11-23T09:08:11.584183819Z"
	        },
	        "Image": "sha256:133ca4ac39008d0056ad45d8cb70521d6b70d6e1b8bbff4678fd4b354efbdf70",
	        "ResolvConfPath": "/var/lib/docker/containers/cd25ec65ad7d7225aaf7739d7d3946f2d586e4220fd3d8a39c26bd159fb65cbc/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/cd25ec65ad7d7225aaf7739d7d3946f2d586e4220fd3d8a39c26bd159fb65cbc/hostname",
	        "HostsPath": "/var/lib/docker/containers/cd25ec65ad7d7225aaf7739d7d3946f2d586e4220fd3d8a39c26bd159fb65cbc/hosts",
	        "LogPath": "/var/lib/docker/containers/cd25ec65ad7d7225aaf7739d7d3946f2d586e4220fd3d8a39c26bd159fb65cbc/cd25ec65ad7d7225aaf7739d7d3946f2d586e4220fd3d8a39c26bd159fb65cbc-json.log",
	        "Name": "/embed-certs-529341",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-529341:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "embed-certs-529341",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "cd25ec65ad7d7225aaf7739d7d3946f2d586e4220fd3d8a39c26bd159fb65cbc",
	                "LowerDir": "/var/lib/docker/overlay2/04b273d65210e041a5d49ab128cb15a16823014667a3e5c0578a92356cb061a5-init/diff:/var/lib/docker/overlay2/5d7426d7b7590330a796f9ce8929a3dfaf6f95af5e86f4e4ea7f2a7f53308616/diff",
	                "MergedDir": "/var/lib/docker/overlay2/04b273d65210e041a5d49ab128cb15a16823014667a3e5c0578a92356cb061a5/merged",
	                "UpperDir": "/var/lib/docker/overlay2/04b273d65210e041a5d49ab128cb15a16823014667a3e5c0578a92356cb061a5/diff",
	                "WorkDir": "/var/lib/docker/overlay2/04b273d65210e041a5d49ab128cb15a16823014667a3e5c0578a92356cb061a5/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-529341",
	                "Source": "/var/lib/docker/volumes/embed-certs-529341/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-529341",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-529341",
	                "name.minikube.sigs.k8s.io": "embed-certs-529341",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "33c4cf994abfe99f6ca1aeea1a3b09694eed94faa188f1a4b2b91c434937c00f",
	            "SandboxKey": "/var/run/docker/netns/33c4cf994abf",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33118"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33119"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33122"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33120"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33121"
	                    }
	                ]
	            },
	            "Networks": {
	                "embed-certs-529341": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "b80987257b925fe4e7d7324e318d1724b2e83e5fe12e18005bf9298153219f99",
	                    "EndpointID": "ee13b6b249158f975952c12572eff2160eee69586df50fafd430afbaa27c2b52",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "MacAddress": "c6:e9:37:27:56:b4",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-529341",
	                        "cd25ec65ad7d"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
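The NetworkSettings.Ports map above is the source of the SSH endpoint used earlier: 22/tcp maps to host port 33118, matching the `new ssh client: &{IP:127.0.0.1 Port:33118 ...}` line in the pause stderr. The same Go template minikube ran (see the cli_runner line in the stderr) can be invoked directly; a sketch for this profile:

	# Resolve the host port bound to the container's SSH port; expect 33118 for this run
	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' embed-certs-529341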
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-529341 -n embed-certs-529341
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-529341 -n embed-certs-529341: exit status 2 (343.017413ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
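Exit status 2 with the host reporting Running is consistent with a half-paused node: the pause attempt already ran `systemctl disable --now kubelet`, so kubelet is down while the container stays up. A sketch that surfaces the individual components via the status template (field names are minikube's status fields; the expected output is an assumption for this state):

	# Host should read Running while Kubelet reads Stopped after the failed pause
	out/minikube-linux-amd64 status -p embed-certs-529341 --format='{{.Host}}/{{.Kubelet}}/{{.APIServer}}'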
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-529341 logs -n 25
E1123 09:09:14.954731  107234 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/kindnet-741183/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-529341 logs -n 25: (1.129380937s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ addons  │ enable dashboard -p no-preload-619589 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-619589            │ jenkins │ v1.37.0 │ 23 Nov 25 09:07 UTC │ 23 Nov 25 09:07 UTC │
	│ start   │ -p no-preload-619589 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-619589            │ jenkins │ v1.37.0 │ 23 Nov 25 09:07 UTC │ 23 Nov 25 09:08 UTC │
	│ addons  │ enable metrics-server -p embed-certs-529341 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-529341           │ jenkins │ v1.37.0 │ 23 Nov 25 09:07 UTC │                     │
	│ stop    │ -p embed-certs-529341 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-529341           │ jenkins │ v1.37.0 │ 23 Nov 25 09:07 UTC │ 23 Nov 25 09:08 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-602386 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-602386 │ jenkins │ v1.37.0 │ 23 Nov 25 09:08 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-602386 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-602386 │ jenkins │ v1.37.0 │ 23 Nov 25 09:08 UTC │ 23 Nov 25 09:08 UTC │
	│ addons  │ enable dashboard -p embed-certs-529341 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-529341           │ jenkins │ v1.37.0 │ 23 Nov 25 09:08 UTC │ 23 Nov 25 09:08 UTC │
	│ start   │ -p embed-certs-529341 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-529341           │ jenkins │ v1.37.0 │ 23 Nov 25 09:08 UTC │ 23 Nov 25 09:09 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-602386 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-602386 │ jenkins │ v1.37.0 │ 23 Nov 25 09:08 UTC │ 23 Nov 25 09:08 UTC │
	│ start   │ -p default-k8s-diff-port-602386 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-602386 │ jenkins │ v1.37.0 │ 23 Nov 25 09:08 UTC │ 23 Nov 25 09:09 UTC │
	│ image   │ old-k8s-version-054094 image list --format=json                                                                                                                                                                                               │ old-k8s-version-054094       │ jenkins │ v1.37.0 │ 23 Nov 25 09:08 UTC │ 23 Nov 25 09:08 UTC │
	│ pause   │ -p old-k8s-version-054094 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-054094       │ jenkins │ v1.37.0 │ 23 Nov 25 09:08 UTC │                     │
	│ delete  │ -p old-k8s-version-054094                                                                                                                                                                                                                     │ old-k8s-version-054094       │ jenkins │ v1.37.0 │ 23 Nov 25 09:08 UTC │ 23 Nov 25 09:08 UTC │
	│ delete  │ -p old-k8s-version-054094                                                                                                                                                                                                                     │ old-k8s-version-054094       │ jenkins │ v1.37.0 │ 23 Nov 25 09:08 UTC │ 23 Nov 25 09:08 UTC │
	│ start   │ -p newest-cni-531046 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-531046            │ jenkins │ v1.37.0 │ 23 Nov 25 09:08 UTC │ 23 Nov 25 09:09 UTC │
	│ image   │ no-preload-619589 image list --format=json                                                                                                                                                                                                    │ no-preload-619589            │ jenkins │ v1.37.0 │ 23 Nov 25 09:08 UTC │ 23 Nov 25 09:08 UTC │
	│ pause   │ -p no-preload-619589 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-619589            │ jenkins │ v1.37.0 │ 23 Nov 25 09:08 UTC │                     │
	│ delete  │ -p no-preload-619589                                                                                                                                                                                                                          │ no-preload-619589            │ jenkins │ v1.37.0 │ 23 Nov 25 09:08 UTC │ 23 Nov 25 09:08 UTC │
	│ delete  │ -p no-preload-619589                                                                                                                                                                                                                          │ no-preload-619589            │ jenkins │ v1.37.0 │ 23 Nov 25 09:08 UTC │ 23 Nov 25 09:08 UTC │
	│ addons  │ enable metrics-server -p newest-cni-531046 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-531046            │ jenkins │ v1.37.0 │ 23 Nov 25 09:09 UTC │                     │
	│ stop    │ -p newest-cni-531046 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-531046            │ jenkins │ v1.37.0 │ 23 Nov 25 09:09 UTC │ 23 Nov 25 09:09 UTC │
	│ addons  │ enable dashboard -p newest-cni-531046 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-531046            │ jenkins │ v1.37.0 │ 23 Nov 25 09:09 UTC │ 23 Nov 25 09:09 UTC │
	│ start   │ -p newest-cni-531046 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-531046            │ jenkins │ v1.37.0 │ 23 Nov 25 09:09 UTC │                     │
	│ image   │ embed-certs-529341 image list --format=json                                                                                                                                                                                                   │ embed-certs-529341           │ jenkins │ v1.37.0 │ 23 Nov 25 09:09 UTC │ 23 Nov 25 09:09 UTC │
	│ pause   │ -p embed-certs-529341 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-529341           │ jenkins │ v1.37.0 │ 23 Nov 25 09:09 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/23 09:09:09
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1123 09:09:09.393949  428718 out.go:360] Setting OutFile to fd 1 ...
	I1123 09:09:09.394192  428718 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 09:09:09.394201  428718 out.go:374] Setting ErrFile to fd 2...
	I1123 09:09:09.394206  428718 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 09:09:09.394406  428718 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21969-103686/.minikube/bin
	I1123 09:09:09.394917  428718 out.go:368] Setting JSON to false
	I1123 09:09:09.396361  428718 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":6689,"bootTime":1763882260,"procs":405,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1123 09:09:09.396420  428718 start.go:143] virtualization: kvm guest
	I1123 09:09:09.398144  428718 out.go:179] * [newest-cni-531046] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1123 09:09:09.399754  428718 notify.go:221] Checking for updates...
	I1123 09:09:09.399766  428718 out.go:179]   - MINIKUBE_LOCATION=21969
	I1123 09:09:09.402731  428718 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1123 09:09:09.404051  428718 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21969-103686/kubeconfig
	I1123 09:09:09.405353  428718 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21969-103686/.minikube
	I1123 09:09:09.406721  428718 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1123 09:09:09.408298  428718 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1123 09:09:09.410076  428718 config.go:182] Loaded profile config "newest-cni-531046": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 09:09:09.410631  428718 driver.go:422] Setting default libvirt URI to qemu:///system
	I1123 09:09:09.438677  428718 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1123 09:09:09.438842  428718 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 09:09:09.499289  428718 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:61 OomKillDisable:false NGoroutines:66 SystemTime:2025-11-23 09:09:09.488360013 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1123 09:09:09.499392  428718 docker.go:319] overlay module found
	I1123 09:09:09.501298  428718 out.go:179] * Using the docker driver based on existing profile
	I1123 09:09:09.502521  428718 start.go:309] selected driver: docker
	I1123 09:09:09.502539  428718 start.go:927] validating driver "docker" against &{Name:newest-cni-531046 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-531046 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 09:09:09.502628  428718 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1123 09:09:09.503156  428718 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 09:09:09.567159  428718 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:61 OomKillDisable:false NGoroutines:66 SystemTime:2025-11-23 09:09:09.555013229 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1123 09:09:09.567643  428718 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1123 09:09:09.567695  428718 cni.go:84] Creating CNI manager for ""
	I1123 09:09:09.567768  428718 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1123 09:09:09.567832  428718 start.go:353] cluster config:
	{Name:newest-cni-531046 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-531046 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 09:09:09.569790  428718 out.go:179] * Starting "newest-cni-531046" primary control-plane node in "newest-cni-531046" cluster
	I1123 09:09:09.570956  428718 cache.go:134] Beginning downloading kic base image for docker with crio
	I1123 09:09:09.573142  428718 out.go:179] * Pulling base image v0.0.48-1763789673-21948 ...
	I1123 09:09:09.574347  428718 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1123 09:09:09.574385  428718 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21969-103686/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1123 09:09:09.574403  428718 cache.go:65] Caching tarball of preloaded images
	I1123 09:09:09.574469  428718 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1123 09:09:09.574518  428718 preload.go:238] Found /home/jenkins/minikube-integration/21969-103686/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1123 09:09:09.574535  428718 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1123 09:09:09.574672  428718 profile.go:143] Saving config to /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/newest-cni-531046/config.json ...
	I1123 09:09:09.596348  428718 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon, skipping pull
	I1123 09:09:09.596375  428718 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in daemon, skipping load
	I1123 09:09:09.596395  428718 cache.go:243] Successfully downloaded all kic artifacts
	I1123 09:09:09.596441  428718 start.go:360] acquireMachinesLock for newest-cni-531046: {Name:mk2e7449a31b4c230f352b5cfe12c4dd1ce5e4f0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1123 09:09:09.596513  428718 start.go:364] duration metric: took 46.31µs to acquireMachinesLock for "newest-cni-531046"
	I1123 09:09:09.596535  428718 start.go:96] Skipping create...Using existing machine configuration
	I1123 09:09:09.596546  428718 fix.go:54] fixHost starting: 
	I1123 09:09:09.596775  428718 cli_runner.go:164] Run: docker container inspect newest-cni-531046 --format={{.State.Status}}
	I1123 09:09:09.615003  428718 fix.go:112] recreateIfNeeded on newest-cni-531046: state=Stopped err=<nil>
	W1123 09:09:09.615044  428718 fix.go:138] unexpected machine state, will restart: <nil>
	I1123 09:09:10.962211  416838 pod_ready.go:94] pod "coredns-66bc5c9577-64rdm" is "Ready"
	I1123 09:09:10.962238  416838 pod_ready.go:86] duration metric: took 41.505811079s for pod "coredns-66bc5c9577-64rdm" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:09:10.964724  416838 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-602386" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:09:10.968263  416838 pod_ready.go:94] pod "etcd-default-k8s-diff-port-602386" is "Ready"
	I1123 09:09:10.968282  416838 pod_ready.go:86] duration metric: took 3.536222ms for pod "etcd-default-k8s-diff-port-602386" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:09:10.969953  416838 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-602386" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:09:10.973341  416838 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-602386" is "Ready"
	I1123 09:09:10.973358  416838 pod_ready.go:86] duration metric: took 3.359803ms for pod "kube-apiserver-default-k8s-diff-port-602386" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:09:10.975266  416838 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-602386" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:09:11.160920  416838 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-602386" is "Ready"
	I1123 09:09:11.160945  416838 pod_ready.go:86] duration metric: took 185.660534ms for pod "kube-controller-manager-default-k8s-diff-port-602386" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:09:11.361102  416838 pod_ready.go:83] waiting for pod "kube-proxy-wnrqx" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:09:11.760631  416838 pod_ready.go:94] pod "kube-proxy-wnrqx" is "Ready"
	I1123 09:09:11.760661  416838 pod_ready.go:86] duration metric: took 399.534821ms for pod "kube-proxy-wnrqx" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:09:11.961014  416838 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-602386" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:09:12.360788  416838 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-602386" is "Ready"
	I1123 09:09:12.360818  416838 pod_ready.go:86] duration metric: took 399.779479ms for pod "kube-scheduler-default-k8s-diff-port-602386" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:09:12.360830  416838 pod_ready.go:40] duration metric: took 42.908765939s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1123 09:09:12.404049  416838 start.go:625] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1123 09:09:12.405650  416838 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-602386" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Nov 23 09:08:33 embed-certs-529341 crio[567]: time="2025-11-23T09:08:33.784727927Z" level=info msg="Created container 2d86c9903acaaee33c5f8d79a63a3e1ae843cbdfaddabca4d3e3d23a8f161671: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-62dbw/dashboard-metrics-scraper" id=5ad5fa77-3823-47ef-a051-48672ca44d85 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 09:08:33 embed-certs-529341 crio[567]: time="2025-11-23T09:08:33.785635203Z" level=info msg="Starting container: 2d86c9903acaaee33c5f8d79a63a3e1ae843cbdfaddabca4d3e3d23a8f161671" id=fc97ee7b-f7f9-4421-a083-a58692c6a695 name=/runtime.v1.RuntimeService/StartContainer
	Nov 23 09:08:33 embed-certs-529341 crio[567]: time="2025-11-23T09:08:33.788466998Z" level=info msg="Started container" PID=1736 containerID=2d86c9903acaaee33c5f8d79a63a3e1ae843cbdfaddabca4d3e3d23a8f161671 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-62dbw/dashboard-metrics-scraper id=fc97ee7b-f7f9-4421-a083-a58692c6a695 name=/runtime.v1.RuntimeService/StartContainer sandboxID=b5dfedeb7bb51c67bfbedee7ff9c29b104685cdeef555dc237e079af82565649
	Nov 23 09:08:34 embed-certs-529341 crio[567]: time="2025-11-23T09:08:34.224747192Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=3e954054-2019-4a57-8630-aab025766c6d name=/runtime.v1.ImageService/ImageStatus
	Nov 23 09:08:34 embed-certs-529341 crio[567]: time="2025-11-23T09:08:34.228813636Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=3d704328-f0c1-4ed3-a4d3-5cf307f73726 name=/runtime.v1.ImageService/ImageStatus
	Nov 23 09:08:34 embed-certs-529341 crio[567]: time="2025-11-23T09:08:34.231925988Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-62dbw/dashboard-metrics-scraper" id=42b765f6-2664-45a0-aed3-32da115bdb76 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 09:08:34 embed-certs-529341 crio[567]: time="2025-11-23T09:08:34.232179478Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 09:08:34 embed-certs-529341 crio[567]: time="2025-11-23T09:08:34.242558262Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 09:08:34 embed-certs-529341 crio[567]: time="2025-11-23T09:08:34.243494253Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 09:08:34 embed-certs-529341 crio[567]: time="2025-11-23T09:08:34.298879454Z" level=info msg="Created container 9c33a6395eda255ea3f642c2ca8136f16c8016d9b93e27c71b0c0a735afccab2: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-62dbw/dashboard-metrics-scraper" id=42b765f6-2664-45a0-aed3-32da115bdb76 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 09:08:34 embed-certs-529341 crio[567]: time="2025-11-23T09:08:34.300216979Z" level=info msg="Starting container: 9c33a6395eda255ea3f642c2ca8136f16c8016d9b93e27c71b0c0a735afccab2" id=08a1d9ab-2fba-4ac2-ac9a-3dd5bcb53cc6 name=/runtime.v1.RuntimeService/StartContainer
	Nov 23 09:08:34 embed-certs-529341 crio[567]: time="2025-11-23T09:08:34.302870568Z" level=info msg="Started container" PID=1745 containerID=9c33a6395eda255ea3f642c2ca8136f16c8016d9b93e27c71b0c0a735afccab2 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-62dbw/dashboard-metrics-scraper id=08a1d9ab-2fba-4ac2-ac9a-3dd5bcb53cc6 name=/runtime.v1.RuntimeService/StartContainer sandboxID=b5dfedeb7bb51c67bfbedee7ff9c29b104685cdeef555dc237e079af82565649
	Nov 23 09:08:35 embed-certs-529341 crio[567]: time="2025-11-23T09:08:35.229266882Z" level=info msg="Removing container: 2d86c9903acaaee33c5f8d79a63a3e1ae843cbdfaddabca4d3e3d23a8f161671" id=ddd3244d-7bee-4b0c-8307-f43aef4b1c50 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 23 09:08:35 embed-certs-529341 crio[567]: time="2025-11-23T09:08:35.238724789Z" level=info msg="Removed container 2d86c9903acaaee33c5f8d79a63a3e1ae843cbdfaddabca4d3e3d23a8f161671: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-62dbw/dashboard-metrics-scraper" id=ddd3244d-7bee-4b0c-8307-f43aef4b1c50 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 23 09:08:55 embed-certs-529341 crio[567]: time="2025-11-23T09:08:55.140168559Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=8e12f669-4a2f-4626-a8b8-f73f5c6f2c5a name=/runtime.v1.ImageService/ImageStatus
	Nov 23 09:08:55 embed-certs-529341 crio[567]: time="2025-11-23T09:08:55.141613521Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=6d6eaebc-c0e2-433a-b7aa-d550c5dbf703 name=/runtime.v1.ImageService/ImageStatus
	Nov 23 09:08:55 embed-certs-529341 crio[567]: time="2025-11-23T09:08:55.142791491Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-62dbw/dashboard-metrics-scraper" id=8e4689c6-c00c-42fe-a0c5-85fc6a40079f name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 09:08:55 embed-certs-529341 crio[567]: time="2025-11-23T09:08:55.142939044Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 09:08:55 embed-certs-529341 crio[567]: time="2025-11-23T09:08:55.150167994Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 09:08:55 embed-certs-529341 crio[567]: time="2025-11-23T09:08:55.15085134Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 09:08:55 embed-certs-529341 crio[567]: time="2025-11-23T09:08:55.183426906Z" level=info msg="Created container 68b41eb209db97fed6d1c6b8bca8594140644f8fd87b6b346514d4238db1ac52: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-62dbw/dashboard-metrics-scraper" id=8e4689c6-c00c-42fe-a0c5-85fc6a40079f name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 09:08:55 embed-certs-529341 crio[567]: time="2025-11-23T09:08:55.184593092Z" level=info msg="Starting container: 68b41eb209db97fed6d1c6b8bca8594140644f8fd87b6b346514d4238db1ac52" id=9b35ebe3-f0b9-4f18-923d-3a547b53d699 name=/runtime.v1.RuntimeService/StartContainer
	Nov 23 09:08:55 embed-certs-529341 crio[567]: time="2025-11-23T09:08:55.187126709Z" level=info msg="Started container" PID=1759 containerID=68b41eb209db97fed6d1c6b8bca8594140644f8fd87b6b346514d4238db1ac52 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-62dbw/dashboard-metrics-scraper id=9b35ebe3-f0b9-4f18-923d-3a547b53d699 name=/runtime.v1.RuntimeService/StartContainer sandboxID=b5dfedeb7bb51c67bfbedee7ff9c29b104685cdeef555dc237e079af82565649
	Nov 23 09:08:55 embed-certs-529341 crio[567]: time="2025-11-23T09:08:55.284998208Z" level=info msg="Removing container: 9c33a6395eda255ea3f642c2ca8136f16c8016d9b93e27c71b0c0a735afccab2" id=630e0ffe-d0ca-4d4b-8fc8-73483f496fe4 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 23 09:08:55 embed-certs-529341 crio[567]: time="2025-11-23T09:08:55.298260814Z" level=info msg="Removed container 9c33a6395eda255ea3f642c2ca8136f16c8016d9b93e27c71b0c0a735afccab2: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-62dbw/dashboard-metrics-scraper" id=630e0ffe-d0ca-4d4b-8fc8-73483f496fe4 name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	68b41eb209db9       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           19 seconds ago      Exited              dashboard-metrics-scraper   2                   b5dfedeb7bb51       dashboard-metrics-scraper-6ffb444bf9-62dbw   kubernetes-dashboard
	1fcc5add6e61a       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   43 seconds ago      Running             kubernetes-dashboard        0                   b04250bf97df7       kubernetes-dashboard-855c9754f9-rvlmt        kubernetes-dashboard
	55ae874f44dc2       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           51 seconds ago      Running             storage-provisioner         1                   e132719962db7       storage-provisioner                          kube-system
	a4083bb27db36       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           52 seconds ago      Running             busybox                     1                   b768d63a3e28f       busybox                                      default
	4c812cd4da92b       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           52 seconds ago      Running             coredns                     0                   3f87efe921d05       coredns-66bc5c9577-k4bmj                     kube-system
	f2040fd9793ad       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           52 seconds ago      Exited              storage-provisioner         0                   e132719962db7       storage-provisioner                          kube-system
	b39a3a89a3260       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           52 seconds ago      Running             kindnet-cni                 0                   a14e404f6a09d       kindnet-twlcq                                kube-system
	24605aef520e0       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                           52 seconds ago      Running             kube-proxy                  0                   226aeedea31d8       kube-proxy-xfwhk                             kube-system
	73227818d4fc9       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                           54 seconds ago      Running             etcd                        0                   8fd1ed6ae0c0b       etcd-embed-certs-529341                      kube-system
	e146e17fa358a       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                           54 seconds ago      Running             kube-scheduler              0                   2e5cf9c877130       kube-scheduler-embed-certs-529341            kube-system
	9203249d1159b       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                           54 seconds ago      Running             kube-controller-manager     0                   fa2f0ae5069ef       kube-controller-manager-embed-certs-529341   kube-system
	51c0b9d62ee3b       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                           54 seconds ago      Running             kube-apiserver              0                   34049b849ed77       kube-apiserver-embed-certs-529341            kube-system
	
	
	==> coredns [4c812cd4da92b4f8e5c65f43f8329812f3a2909eda6cf6aecc65a2735df04ae1] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 66f0a748f44f6317a6b122af3f457c9dd0ecaed8718ffbf95a69434523efd9ec4992e71f54c7edd5753646fe9af89ac2138b9c3ce14d4a0ba9d2372a55f120bb
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:59785 - 26556 "HINFO IN 30320258078332738.1692373228013941160. udp 55 false 512" NXDOMAIN qr,rd,ra 130 0.071437658s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               embed-certs-529341
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-529341
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=50c3a8a3c03e8a84b6c978a884d21c3de8c6d4f1
	                    minikube.k8s.io/name=embed-certs-529341
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_23T09_07_22_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 23 Nov 2025 09:07:19 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-529341
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 23 Nov 2025 09:09:03 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 23 Nov 2025 09:08:52 +0000   Sun, 23 Nov 2025 09:07:17 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 23 Nov 2025 09:08:52 +0000   Sun, 23 Nov 2025 09:07:17 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 23 Nov 2025 09:08:52 +0000   Sun, 23 Nov 2025 09:07:17 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 23 Nov 2025 09:08:52 +0000   Sun, 23 Nov 2025 09:07:38 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    embed-certs-529341
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 9629f1d5bc1ed524a56ce23c69214c09
	  System UUID:                98603fac-552b-4d14-ae49-954d6ab02bae
	  Boot ID:                    49386d37-de97-442f-8364-9b77578bcf47
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         93s
	  kube-system                 coredns-66bc5c9577-k4bmj                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     107s
	  kube-system                 etcd-embed-certs-529341                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         113s
	  kube-system                 kindnet-twlcq                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      107s
	  kube-system                 kube-apiserver-embed-certs-529341             250m (3%)     0 (0%)      0 (0%)           0 (0%)         113s
	  kube-system                 kube-controller-manager-embed-certs-529341    200m (2%)     0 (0%)      0 (0%)           0 (0%)         113s
	  kube-system                 kube-proxy-xfwhk                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         107s
	  kube-system                 kube-scheduler-embed-certs-529341             100m (1%)     0 (0%)      0 (0%)           0 (0%)         113s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         107s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-62dbw    0 (0%)        0 (0%)      0 (0%)           0 (0%)         49s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-rvlmt         0 (0%)        0 (0%)      0 (0%)           0 (0%)         49s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 106s               kube-proxy       
	  Normal  Starting                 51s                kube-proxy       
	  Normal  NodeHasSufficientMemory  113s               kubelet          Node embed-certs-529341 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    113s               kubelet          Node embed-certs-529341 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     113s               kubelet          Node embed-certs-529341 status is now: NodeHasSufficientPID
	  Normal  Starting                 113s               kubelet          Starting kubelet.
	  Normal  RegisteredNode           108s               node-controller  Node embed-certs-529341 event: Registered Node embed-certs-529341 in Controller
	  Normal  NodeReady                96s                kubelet          Node embed-certs-529341 status is now: NodeReady
	  Normal  Starting                 55s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  55s (x8 over 55s)  kubelet          Node embed-certs-529341 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    55s (x8 over 55s)  kubelet          Node embed-certs-529341 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     55s (x8 over 55s)  kubelet          Node embed-certs-529341 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           49s                node-controller  Node embed-certs-529341 event: Registered Node embed-certs-529341 in Controller
	
	
	==> dmesg <==
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff e2 f6 b3 df 65 66 08 06
	[ +15.220231] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff ce d6 cd 1c d5 af 08 06
	[  +0.016823] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff 42 21 9e 70 54 d2 08 06
	[  +0.853950] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 8a f3 da 67 50 34 08 06
	[  +0.000402] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff e2 f6 b3 df 65 66 08 06
	[Nov23 09:06] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 3a fe f0 bb b2 e5 08 06
	[  +0.000433] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000017] ll header: 00000000: ff ff ff ff ff ff 42 21 9e 70 54 d2 08 06
	[ +22.099976] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff fa bc 42 68 ca 38 08 06
	[  +0.042361] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 02 6f 93 2c ed 12 08 06
	[ +12.988668] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff c6 40 c7 0d 08 88 08 06
	[  +0.000458] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff c6 f2 c5 3b d5 0a 08 06
	[  +8.074904] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff ba d8 15 23 cb ea 08 06
	[  +0.000480] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff fa bc 42 68 ca 38 08 06
	
	
	==> etcd [73227818d4fc9086a936e1b1251ac49dc9f565e9664d34c892e0e5e5c62a8920] <==
	{"level":"warn","ts":"2025-11-23T09:08:21.456508Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41452","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:08:21.462428Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41478","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:08:21.468711Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41484","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:08:21.479059Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41490","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:08:21.484921Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41516","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:08:21.491052Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41522","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:08:21.497319Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41548","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:08:21.503238Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41558","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:08:21.509174Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41578","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:08:21.515217Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41596","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:08:21.521197Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41608","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:08:21.527110Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41618","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:08:21.533146Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41636","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:08:21.539857Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41644","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:08:21.552159Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41674","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:08:21.555443Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41682","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:08:21.561413Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41694","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:08:21.567333Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41708","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:08:21.619697Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41726","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-23T09:08:36.353210Z","caller":"traceutil/trace.go:172","msg":"trace[541934201] transaction","detail":"{read_only:false; response_revision:598; number_of_response:1; }","duration":"117.50899ms","start":"2025-11-23T09:08:36.235678Z","end":"2025-11-23T09:08:36.353187Z","steps":["trace[541934201] 'process raft request'  (duration: 117.365367ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-23T09:08:36.362631Z","caller":"traceutil/trace.go:172","msg":"trace[1484500734] transaction","detail":"{read_only:false; response_revision:599; number_of_response:1; }","duration":"123.403892ms","start":"2025-11-23T09:08:36.239200Z","end":"2025-11-23T09:08:36.362604Z","steps":["trace[1484500734] 'process raft request'  (duration: 123.181938ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-23T09:08:42.755473Z","caller":"traceutil/trace.go:172","msg":"trace[1897104766] transaction","detail":"{read_only:false; response_revision:608; number_of_response:1; }","duration":"110.243935ms","start":"2025-11-23T09:08:42.645212Z","end":"2025-11-23T09:08:42.755456Z","steps":["trace[1897104766] 'process raft request'  (duration: 83.730417ms)","trace[1897104766] 'compare'  (duration: 26.403441ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-23T09:08:43.453312Z","caller":"traceutil/trace.go:172","msg":"trace[1836705768] transaction","detail":"{read_only:false; response_revision:610; number_of_response:1; }","duration":"116.83929ms","start":"2025-11-23T09:08:43.336442Z","end":"2025-11-23T09:08:43.453281Z","steps":["trace[1836705768] 'process raft request'  (duration: 116.663303ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-23T09:08:43.720579Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"118.14294ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/coredns-66bc5c9577-k4bmj\" limit:1 ","response":"range_response_count:1 size:5936"}
	{"level":"info","ts":"2025-11-23T09:08:43.720714Z","caller":"traceutil/trace.go:172","msg":"trace[1544840451] range","detail":"{range_begin:/registry/pods/kube-system/coredns-66bc5c9577-k4bmj; range_end:; response_count:1; response_revision:610; }","duration":"118.323526ms","start":"2025-11-23T09:08:43.602370Z","end":"2025-11-23T09:08:43.720693Z","steps":["trace[1544840451] 'range keys from in-memory index tree'  (duration: 117.972312ms)"],"step_count":1}
	
	
	==> kernel <==
	 09:09:14 up  1:51,  0 user,  load average: 4.94, 4.50, 2.92
	Linux embed-certs-529341 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [b39a3a89a3260a4d13829f17d219e1eff98e35fb95ac427b3eeff00f500fc9cf] <==
	I1123 09:08:22.686549       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1123 09:08:22.686805       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I1123 09:08:22.687015       1 main.go:148] setting mtu 1500 for CNI 
	I1123 09:08:22.687034       1 main.go:178] kindnetd IP family: "ipv4"
	I1123 09:08:22.687061       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-23T09:08:22Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1123 09:08:22.889534       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1123 09:08:22.890152       1 controller.go:381] "Waiting for informer caches to sync"
	I1123 09:08:22.890227       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	E1123 09:08:22.890501       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1123 09:08:22.983126       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1123 09:08:24.090906       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1123 09:08:24.090950       1 metrics.go:72] Registering metrics
	I1123 09:08:24.091087       1 controller.go:711] "Syncing nftables rules"
	I1123 09:08:32.890425       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1123 09:08:32.890498       1 main.go:301] handling current node
	I1123 09:08:42.892178       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1123 09:08:42.892218       1 main.go:301] handling current node
	I1123 09:08:52.889582       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1123 09:08:52.889618       1 main.go:301] handling current node
	I1123 09:09:02.892800       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1123 09:09:02.892841       1 main.go:301] handling current node
	I1123 09:09:12.896070       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1123 09:09:12.896100       1 main.go:301] handling current node
	
	
	==> kube-apiserver [51c0b9d62ee3b397d97f51cf65c1c8166419f7ce47ad5cd1f86257c9ff8d2429] <==
	I1123 09:08:22.086184       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1123 09:08:22.086686       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1123 09:08:22.087090       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1123 09:08:22.087158       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1123 09:08:22.087160       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1123 09:08:22.087235       1 aggregator.go:171] initial CRD sync complete...
	I1123 09:08:22.087247       1 autoregister_controller.go:144] Starting autoregister controller
	I1123 09:08:22.087253       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1123 09:08:22.087260       1 cache.go:39] Caches are synced for autoregister controller
	I1123 09:08:22.087442       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1123 09:08:22.087848       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1123 09:08:22.087945       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1123 09:08:22.092473       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1123 09:08:22.118227       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1123 09:08:22.213155       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1123 09:08:22.323027       1 controller.go:667] quota admission added evaluator for: namespaces
	I1123 09:08:22.349494       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1123 09:08:22.368505       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1123 09:08:22.375169       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1123 09:08:22.412433       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.97.97.230"}
	I1123 09:08:22.427833       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.105.252.82"}
	I1123 09:08:22.992039       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1123 09:08:25.465417       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1123 09:08:25.616150       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1123 09:08:26.016168       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [9203249d1159b35eb2d2457002eb5a7611462190dc85089a0e28c7fd11b1257a] <==
	I1123 09:08:25.375429       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1123 09:08:25.381760       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1123 09:08:25.381776       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1123 09:08:25.381782       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1123 09:08:25.386389       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1123 09:08:25.388581       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1123 09:08:25.390868       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1123 09:08:25.406125       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1123 09:08:25.411695       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1123 09:08:25.411729       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1123 09:08:25.411700       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1123 09:08:25.411851       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1123 09:08:25.411902       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1123 09:08:25.411919       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1123 09:08:25.411948       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1123 09:08:25.412221       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1123 09:08:25.412442       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1123 09:08:25.412917       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1123 09:08:25.412946       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1123 09:08:25.413113       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1123 09:08:25.414198       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1123 09:08:25.420382       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1123 09:08:25.431549       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1123 09:08:25.433829       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1123 09:08:25.434898       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	
	
	==> kube-proxy [24605aef520e058c1174bc9967a0b76ad5e754e93c2fe3760330c218fd7991da] <==
	I1123 09:08:22.554697       1 server_linux.go:53] "Using iptables proxy"
	I1123 09:08:22.612600       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1123 09:08:22.712741       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1123 09:08:22.712775       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.103.2"]
	E1123 09:08:22.712864       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1123 09:08:22.731247       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1123 09:08:22.731297       1 server_linux.go:132] "Using iptables Proxier"
	I1123 09:08:22.736358       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1123 09:08:22.737176       1 server.go:527] "Version info" version="v1.34.1"
	I1123 09:08:22.737226       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1123 09:08:22.739099       1 config.go:403] "Starting serviceCIDR config controller"
	I1123 09:08:22.739122       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1123 09:08:22.739160       1 config.go:106] "Starting endpoint slice config controller"
	I1123 09:08:22.739166       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1123 09:08:22.739166       1 config.go:309] "Starting node config controller"
	I1123 09:08:22.739179       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1123 09:08:22.739153       1 config.go:200] "Starting service config controller"
	I1123 09:08:22.739197       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1123 09:08:22.839786       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1123 09:08:22.839838       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1123 09:08:22.839849       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1123 09:08:22.839827       1 shared_informer.go:356] "Caches are synced" controller="node config"
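	
	Note: the kube-proxy warning above is advisory rather than an error: with nodePortAddresses unset, NodePort services accept connections on all local IPs. If one wanted to apply the log's own suggestion in a kubeadm-managed cluster like this one, the field lives in the embedded KubeProxyConfiguration; a hedged sketch, assuming the standard kubeadm ConfigMap and DaemonSet names:
	
	$ kubectl --context embed-certs-529341 -n kube-system edit configmap kube-proxy
	  # set nodePortAddresses: ["primary"] in the KubeProxyConfiguration block, then:
	$ kubectl --context embed-certs-529341 -n kube-system rollout restart daemonset kube-proxy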
	
	
	==> kube-scheduler [e146e17fa358a72d868c4916214f772a64934dfcef476610c2ec35b50a15e5a8] <==
	I1123 09:08:20.758415       1 serving.go:386] Generated self-signed cert in-memory
	W1123 09:08:22.003168       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1123 09:08:22.003295       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1123 09:08:22.003316       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1123 09:08:22.003327       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1123 09:08:22.046845       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1123 09:08:22.046883       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1123 09:08:22.049908       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1123 09:08:22.049959       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1123 09:08:22.050350       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1123 09:08:22.050694       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1123 09:08:22.150549       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
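	
	Note: the requestheader_controller warning at startup looks transient here: the scheduler continued without the authentication configuration and its caches synced two lines later. If it persisted, the log's own hint could be applied literally; a sketch with a hypothetical binding name, using --user because the scheduler authenticates as the user system:kube-scheduler rather than a ServiceAccount:
	
	$ kubectl --context embed-certs-529341 -n kube-system create rolebinding scheduler-authn-reader \
	    --role=extension-apiserver-authentication-reader --user=system:kube-scheduler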
	
	
	==> kubelet <==
	Nov 23 09:08:23 embed-certs-529341 kubelet[733]: I1123 09:08:23.166121     733 scope.go:117] "RemoveContainer" containerID="f2040fd9793ad37371368b566e46ecdaef1bdd733df18fb23cc2af7669381df4"
	Nov 23 09:08:26 embed-certs-529341 kubelet[733]: I1123 09:08:26.034505     733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/ba819018-1e9f-492a-8282-cbb1801bf72e-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-rvlmt\" (UID: \"ba819018-1e9f-492a-8282-cbb1801bf72e\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-rvlmt"
	Nov 23 09:08:26 embed-certs-529341 kubelet[733]: I1123 09:08:26.034549     733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vbsdv\" (UniqueName: \"kubernetes.io/projected/ba819018-1e9f-492a-8282-cbb1801bf72e-kube-api-access-vbsdv\") pod \"kubernetes-dashboard-855c9754f9-rvlmt\" (UID: \"ba819018-1e9f-492a-8282-cbb1801bf72e\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-rvlmt"
	Nov 23 09:08:26 embed-certs-529341 kubelet[733]: I1123 09:08:26.034572     733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/18538be9-2dd3-4ea1-890e-d78a3d24eff0-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-62dbw\" (UID: \"18538be9-2dd3-4ea1-890e-d78a3d24eff0\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-62dbw"
	Nov 23 09:08:26 embed-certs-529341 kubelet[733]: I1123 09:08:26.034589     733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-thcgh\" (UniqueName: \"kubernetes.io/projected/18538be9-2dd3-4ea1-890e-d78a3d24eff0-kube-api-access-thcgh\") pod \"dashboard-metrics-scraper-6ffb444bf9-62dbw\" (UID: \"18538be9-2dd3-4ea1-890e-d78a3d24eff0\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-62dbw"
	Nov 23 09:08:28 embed-certs-529341 kubelet[733]: I1123 09:08:28.405827     733 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Nov 23 09:08:34 embed-certs-529341 kubelet[733]: I1123 09:08:34.223930     733 scope.go:117] "RemoveContainer" containerID="2d86c9903acaaee33c5f8d79a63a3e1ae843cbdfaddabca4d3e3d23a8f161671"
	Nov 23 09:08:34 embed-certs-529341 kubelet[733]: I1123 09:08:34.242456     733 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-rvlmt" podStartSLOduration=4.60619963 podStartE2EDuration="9.242206992s" podCreationTimestamp="2025-11-23 09:08:25 +0000 UTC" firstStartedPulling="2025-11-23 09:08:26.172474316 +0000 UTC m=+7.130273785" lastFinishedPulling="2025-11-23 09:08:30.808481664 +0000 UTC m=+11.766281147" observedRunningTime="2025-11-23 09:08:31.26376155 +0000 UTC m=+12.221561037" watchObservedRunningTime="2025-11-23 09:08:34.242206992 +0000 UTC m=+15.200006479"
	Nov 23 09:08:35 embed-certs-529341 kubelet[733]: I1123 09:08:35.227788     733 scope.go:117] "RemoveContainer" containerID="2d86c9903acaaee33c5f8d79a63a3e1ae843cbdfaddabca4d3e3d23a8f161671"
	Nov 23 09:08:35 embed-certs-529341 kubelet[733]: I1123 09:08:35.227944     733 scope.go:117] "RemoveContainer" containerID="9c33a6395eda255ea3f642c2ca8136f16c8016d9b93e27c71b0c0a735afccab2"
	Nov 23 09:08:35 embed-certs-529341 kubelet[733]: E1123 09:08:35.228175     733 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-62dbw_kubernetes-dashboard(18538be9-2dd3-4ea1-890e-d78a3d24eff0)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-62dbw" podUID="18538be9-2dd3-4ea1-890e-d78a3d24eff0"
	Nov 23 09:08:36 embed-certs-529341 kubelet[733]: I1123 09:08:36.232446     733 scope.go:117] "RemoveContainer" containerID="9c33a6395eda255ea3f642c2ca8136f16c8016d9b93e27c71b0c0a735afccab2"
	Nov 23 09:08:36 embed-certs-529341 kubelet[733]: E1123 09:08:36.232680     733 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-62dbw_kubernetes-dashboard(18538be9-2dd3-4ea1-890e-d78a3d24eff0)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-62dbw" podUID="18538be9-2dd3-4ea1-890e-d78a3d24eff0"
	Nov 23 09:08:44 embed-certs-529341 kubelet[733]: I1123 09:08:44.167022     733 scope.go:117] "RemoveContainer" containerID="9c33a6395eda255ea3f642c2ca8136f16c8016d9b93e27c71b0c0a735afccab2"
	Nov 23 09:08:44 embed-certs-529341 kubelet[733]: E1123 09:08:44.167208     733 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-62dbw_kubernetes-dashboard(18538be9-2dd3-4ea1-890e-d78a3d24eff0)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-62dbw" podUID="18538be9-2dd3-4ea1-890e-d78a3d24eff0"
	Nov 23 09:08:55 embed-certs-529341 kubelet[733]: I1123 09:08:55.139201     733 scope.go:117] "RemoveContainer" containerID="9c33a6395eda255ea3f642c2ca8136f16c8016d9b93e27c71b0c0a735afccab2"
	Nov 23 09:08:55 embed-certs-529341 kubelet[733]: I1123 09:08:55.283695     733 scope.go:117] "RemoveContainer" containerID="9c33a6395eda255ea3f642c2ca8136f16c8016d9b93e27c71b0c0a735afccab2"
	Nov 23 09:08:55 embed-certs-529341 kubelet[733]: I1123 09:08:55.283909     733 scope.go:117] "RemoveContainer" containerID="68b41eb209db97fed6d1c6b8bca8594140644f8fd87b6b346514d4238db1ac52"
	Nov 23 09:08:55 embed-certs-529341 kubelet[733]: E1123 09:08:55.284129     733 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-62dbw_kubernetes-dashboard(18538be9-2dd3-4ea1-890e-d78a3d24eff0)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-62dbw" podUID="18538be9-2dd3-4ea1-890e-d78a3d24eff0"
	Nov 23 09:09:04 embed-certs-529341 kubelet[733]: I1123 09:09:04.166844     733 scope.go:117] "RemoveContainer" containerID="68b41eb209db97fed6d1c6b8bca8594140644f8fd87b6b346514d4238db1ac52"
	Nov 23 09:09:04 embed-certs-529341 kubelet[733]: E1123 09:09:04.167077     733 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-62dbw_kubernetes-dashboard(18538be9-2dd3-4ea1-890e-d78a3d24eff0)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-62dbw" podUID="18538be9-2dd3-4ea1-890e-d78a3d24eff0"
	Nov 23 09:09:12 embed-certs-529341 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 23 09:09:12 embed-certs-529341 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 23 09:09:12 embed-certs-529341 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Nov 23 09:09:12 embed-certs-529341 systemd[1]: kubelet.service: Consumed 1.686s CPU time.
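	
	Note: the kubelet entries show dashboard-metrics-scraper stuck in CrashLoopBackOff, the restart back-off doubling from 10s to 20s before kubelet itself was stopped (consistent with the pause operation under test). To see why the container keeps exiting, the previous instance's output can be fetched with --previous; a sketch reusing the context and pod name from the log above:
	
	$ kubectl --context embed-certs-529341 -n kubernetes-dashboard \
	    logs dashboard-metrics-scraper-6ffb444bf9-62dbw --previous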
	
	
	==> kubernetes-dashboard [1fcc5add6e61a5d923fcf319bbde8c2bbb3114452f9be5a89a324af683e58bd4] <==
	2025/11/23 09:08:30 Using namespace: kubernetes-dashboard
	2025/11/23 09:08:30 Using in-cluster config to connect to apiserver
	2025/11/23 09:08:30 Using secret token for csrf signing
	2025/11/23 09:08:30 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/23 09:08:30 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/23 09:08:30 Successful initial request to the apiserver, version: v1.34.1
	2025/11/23 09:08:30 Generating JWE encryption key
	2025/11/23 09:08:30 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/23 09:08:30 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/23 09:08:31 Initializing JWE encryption key from synchronized object
	2025/11/23 09:08:31 Creating in-cluster Sidecar client
	2025/11/23 09:08:31 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/23 09:08:31 Serving insecurely on HTTP port: 9090
	2025/11/23 09:09:01 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/23 09:08:30 Starting overwatch
	
	
	==> storage-provisioner [55ae874f44dc2af85018340071210dea3f20e2e8d9f97c97c756c4502243dc3e] <==
	W1123 09:08:50.790834       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:08:52.794744       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:08:52.799513       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:08:54.803640       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:08:54.809428       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:08:56.813344       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:08:56.818029       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:08:58.821592       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:08:58.826680       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:09:00.829088       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:09:00.832812       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:09:02.836285       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:09:02.841026       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:09:04.844172       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:09:04.849036       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:09:06.854383       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:09:06.858625       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:09:08.861930       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:09:08.866111       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:09:10.870117       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:09:10.877345       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:09:12.880866       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:09:12.884696       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:09:14.888935       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:09:14.893446       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
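	
	Note: the warning pair repeating every ~2s most likely comes from the provisioner's leader-election renewals, which still read and update a legacy v1 Endpoints lock object; it is noise rather than a failure. The replacement resource named in the warning can be listed directly; a minimal sketch:
	
	$ kubectl --context embed-certs-529341 get endpointslices.discovery.k8s.io -A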
	
	
	==> storage-provisioner [f2040fd9793ad37371368b566e46ecdaef1bdd733df18fb23cc2af7669381df4] <==
	I1123 09:08:22.531190       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1123 09:08:22.535129       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
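	
	Note: this second storage-provisioner block is the pre-restart container: it started before the API server was reachable, failed its /version check with connection refused, and exited; the kubelet's RemoveContainer for f2040fd9... at the top of the kubelet section cleans up this instance, and the [55ae874f...] container above is its replacement. Both containers can be confirmed from inside the node; a hedged sketch via minikube ssh, assuming crictl is available in the node image, as it normally is for the crio runtime:
	
	$ out/minikube-linux-amd64 -p embed-certs-529341 ssh "sudo crictl ps -a | grep storage-provisioner"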
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-529341 -n embed-certs-529341
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-529341 -n embed-certs-529341: exit status 2 (340.737605ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-529341 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-529341
helpers_test.go:243: (dbg) docker inspect embed-certs-529341:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "cd25ec65ad7d7225aaf7739d7d3946f2d586e4220fd3d8a39c26bd159fb65cbc",
	        "Created": "2025-11-23T09:07:06.148431191Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 415451,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-23T09:08:12.954998675Z",
	            "FinishedAt": "2025-11-23T09:08:11.584183819Z"
	        },
	        "Image": "sha256:133ca4ac39008d0056ad45d8cb70521d6b70d6e1b8bbff4678fd4b354efbdf70",
	        "ResolvConfPath": "/var/lib/docker/containers/cd25ec65ad7d7225aaf7739d7d3946f2d586e4220fd3d8a39c26bd159fb65cbc/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/cd25ec65ad7d7225aaf7739d7d3946f2d586e4220fd3d8a39c26bd159fb65cbc/hostname",
	        "HostsPath": "/var/lib/docker/containers/cd25ec65ad7d7225aaf7739d7d3946f2d586e4220fd3d8a39c26bd159fb65cbc/hosts",
	        "LogPath": "/var/lib/docker/containers/cd25ec65ad7d7225aaf7739d7d3946f2d586e4220fd3d8a39c26bd159fb65cbc/cd25ec65ad7d7225aaf7739d7d3946f2d586e4220fd3d8a39c26bd159fb65cbc-json.log",
	        "Name": "/embed-certs-529341",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-529341:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "embed-certs-529341",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "cd25ec65ad7d7225aaf7739d7d3946f2d586e4220fd3d8a39c26bd159fb65cbc",
	                "LowerDir": "/var/lib/docker/overlay2/04b273d65210e041a5d49ab128cb15a16823014667a3e5c0578a92356cb061a5-init/diff:/var/lib/docker/overlay2/5d7426d7b7590330a796f9ce8929a3dfaf6f95af5e86f4e4ea7f2a7f53308616/diff",
	                "MergedDir": "/var/lib/docker/overlay2/04b273d65210e041a5d49ab128cb15a16823014667a3e5c0578a92356cb061a5/merged",
	                "UpperDir": "/var/lib/docker/overlay2/04b273d65210e041a5d49ab128cb15a16823014667a3e5c0578a92356cb061a5/diff",
	                "WorkDir": "/var/lib/docker/overlay2/04b273d65210e041a5d49ab128cb15a16823014667a3e5c0578a92356cb061a5/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-529341",
	                "Source": "/var/lib/docker/volumes/embed-certs-529341/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-529341",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-529341",
	                "name.minikube.sigs.k8s.io": "embed-certs-529341",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "33c4cf994abfe99f6ca1aeea1a3b09694eed94faa188f1a4b2b91c434937c00f",
	            "SandboxKey": "/var/run/docker/netns/33c4cf994abf",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33118"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33119"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33122"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33120"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33121"
	                    }
	                ]
	            },
	            "Networks": {
	                "embed-certs-529341": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "b80987257b925fe4e7d7324e318d1724b2e83e5fe12e18005bf9298153219f99",
	                    "EndpointID": "ee13b6b249158f975952c12572eff2160eee69586df50fafd430afbaa27c2b52",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "MacAddress": "c6:e9:37:27:56:b4",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-529341",
	                        "cd25ec65ad7d"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
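One detail worth reading from the inspect output: HostConfig.PortBindings pins HostIp 127.0.0.1 but leaves HostPort empty, so Docker assigns ephemeral host ports at container start, and the actual assignments (33118-33122) appear only under NetworkSettings.Ports. To extract a single mapping, such as the API server's 8443, a sketch using docker's Go-template syntax:

	$ docker inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' embed-certs-529341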
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-529341 -n embed-certs-529341
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-529341 -n embed-certs-529341: exit status 2 (350.906593ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-529341 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-529341 logs -n 25: (1.292685565s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ addons  │ enable dashboard -p no-preload-619589 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-619589            │ jenkins │ v1.37.0 │ 23 Nov 25 09:07 UTC │ 23 Nov 25 09:07 UTC │
	│ start   │ -p no-preload-619589 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-619589            │ jenkins │ v1.37.0 │ 23 Nov 25 09:07 UTC │ 23 Nov 25 09:08 UTC │
	│ addons  │ enable metrics-server -p embed-certs-529341 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-529341           │ jenkins │ v1.37.0 │ 23 Nov 25 09:07 UTC │                     │
	│ stop    │ -p embed-certs-529341 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-529341           │ jenkins │ v1.37.0 │ 23 Nov 25 09:07 UTC │ 23 Nov 25 09:08 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-602386 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-602386 │ jenkins │ v1.37.0 │ 23 Nov 25 09:08 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-602386 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-602386 │ jenkins │ v1.37.0 │ 23 Nov 25 09:08 UTC │ 23 Nov 25 09:08 UTC │
	│ addons  │ enable dashboard -p embed-certs-529341 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-529341           │ jenkins │ v1.37.0 │ 23 Nov 25 09:08 UTC │ 23 Nov 25 09:08 UTC │
	│ start   │ -p embed-certs-529341 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-529341           │ jenkins │ v1.37.0 │ 23 Nov 25 09:08 UTC │ 23 Nov 25 09:09 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-602386 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-602386 │ jenkins │ v1.37.0 │ 23 Nov 25 09:08 UTC │ 23 Nov 25 09:08 UTC │
	│ start   │ -p default-k8s-diff-port-602386 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-602386 │ jenkins │ v1.37.0 │ 23 Nov 25 09:08 UTC │ 23 Nov 25 09:09 UTC │
	│ image   │ old-k8s-version-054094 image list --format=json                                                                                                                                                                                               │ old-k8s-version-054094       │ jenkins │ v1.37.0 │ 23 Nov 25 09:08 UTC │ 23 Nov 25 09:08 UTC │
	│ pause   │ -p old-k8s-version-054094 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-054094       │ jenkins │ v1.37.0 │ 23 Nov 25 09:08 UTC │                     │
	│ delete  │ -p old-k8s-version-054094                                                                                                                                                                                                                     │ old-k8s-version-054094       │ jenkins │ v1.37.0 │ 23 Nov 25 09:08 UTC │ 23 Nov 25 09:08 UTC │
	│ delete  │ -p old-k8s-version-054094                                                                                                                                                                                                                     │ old-k8s-version-054094       │ jenkins │ v1.37.0 │ 23 Nov 25 09:08 UTC │ 23 Nov 25 09:08 UTC │
	│ start   │ -p newest-cni-531046 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-531046            │ jenkins │ v1.37.0 │ 23 Nov 25 09:08 UTC │ 23 Nov 25 09:09 UTC │
	│ image   │ no-preload-619589 image list --format=json                                                                                                                                                                                                    │ no-preload-619589            │ jenkins │ v1.37.0 │ 23 Nov 25 09:08 UTC │ 23 Nov 25 09:08 UTC │
	│ pause   │ -p no-preload-619589 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-619589            │ jenkins │ v1.37.0 │ 23 Nov 25 09:08 UTC │                     │
	│ delete  │ -p no-preload-619589                                                                                                                                                                                                                          │ no-preload-619589            │ jenkins │ v1.37.0 │ 23 Nov 25 09:08 UTC │ 23 Nov 25 09:08 UTC │
	│ delete  │ -p no-preload-619589                                                                                                                                                                                                                          │ no-preload-619589            │ jenkins │ v1.37.0 │ 23 Nov 25 09:08 UTC │ 23 Nov 25 09:08 UTC │
	│ addons  │ enable metrics-server -p newest-cni-531046 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-531046            │ jenkins │ v1.37.0 │ 23 Nov 25 09:09 UTC │                     │
	│ stop    │ -p newest-cni-531046 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-531046            │ jenkins │ v1.37.0 │ 23 Nov 25 09:09 UTC │ 23 Nov 25 09:09 UTC │
	│ addons  │ enable dashboard -p newest-cni-531046 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-531046            │ jenkins │ v1.37.0 │ 23 Nov 25 09:09 UTC │ 23 Nov 25 09:09 UTC │
	│ start   │ -p newest-cni-531046 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-531046            │ jenkins │ v1.37.0 │ 23 Nov 25 09:09 UTC │                     │
	│ image   │ embed-certs-529341 image list --format=json                                                                                                                                                                                                   │ embed-certs-529341           │ jenkins │ v1.37.0 │ 23 Nov 25 09:09 UTC │ 23 Nov 25 09:09 UTC │
	│ pause   │ -p embed-certs-529341 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-529341           │ jenkins │ v1.37.0 │ 23 Nov 25 09:09 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/23 09:09:09
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1123 09:09:09.393949  428718 out.go:360] Setting OutFile to fd 1 ...
	I1123 09:09:09.394192  428718 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 09:09:09.394201  428718 out.go:374] Setting ErrFile to fd 2...
	I1123 09:09:09.394206  428718 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 09:09:09.394406  428718 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21969-103686/.minikube/bin
	I1123 09:09:09.394917  428718 out.go:368] Setting JSON to false
	I1123 09:09:09.396361  428718 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":6689,"bootTime":1763882260,"procs":405,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1123 09:09:09.396420  428718 start.go:143] virtualization: kvm guest
	I1123 09:09:09.398144  428718 out.go:179] * [newest-cni-531046] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1123 09:09:09.399754  428718 notify.go:221] Checking for updates...
	I1123 09:09:09.399766  428718 out.go:179]   - MINIKUBE_LOCATION=21969
	I1123 09:09:09.402731  428718 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1123 09:09:09.404051  428718 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21969-103686/kubeconfig
	I1123 09:09:09.405353  428718 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21969-103686/.minikube
	I1123 09:09:09.406721  428718 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1123 09:09:09.408298  428718 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1123 09:09:09.410076  428718 config.go:182] Loaded profile config "newest-cni-531046": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 09:09:09.410631  428718 driver.go:422] Setting default libvirt URI to qemu:///system
	I1123 09:09:09.438677  428718 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1123 09:09:09.438842  428718 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 09:09:09.499289  428718 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:61 OomKillDisable:false NGoroutines:66 SystemTime:2025-11-23 09:09:09.488360013 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1123 09:09:09.499392  428718 docker.go:319] overlay module found
	I1123 09:09:09.501298  428718 out.go:179] * Using the docker driver based on existing profile
	I1123 09:09:09.502521  428718 start.go:309] selected driver: docker
	I1123 09:09:09.502539  428718 start.go:927] validating driver "docker" against &{Name:newest-cni-531046 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-531046 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 09:09:09.502628  428718 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1123 09:09:09.503156  428718 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 09:09:09.567159  428718 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:61 OomKillDisable:false NGoroutines:66 SystemTime:2025-11-23 09:09:09.555013229 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1123 09:09:09.567643  428718 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1123 09:09:09.567695  428718 cni.go:84] Creating CNI manager for ""
	I1123 09:09:09.567768  428718 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1123 09:09:09.567832  428718 start.go:353] cluster config:
	{Name:newest-cni-531046 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-531046 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
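The cluster config echoed above is persisted verbatim as JSON in the profile directory (the profile.go line a few entries below saves it). A minimal, hedged sketch for inspecting it offline; the path is the one from this run, and jq is not part of the test harness:

    CONFIG=/home/jenkins/minikube-integration/21969-103686/.minikube/profiles/newest-cni-531046/config.json
    # Pull the fields that drive the restart path that follows: driver,
    # runtime, Kubernetes version, and the kubeadm pod-network-cidr override.
    jq '{Name, Driver,
         KubernetesVersion: .KubernetesConfig.KubernetesVersion,
         ContainerRuntime:  .KubernetesConfig.ContainerRuntime,
         ExtraOptions:      .KubernetesConfig.ExtraOptions}' "$CONFIG"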
	I1123 09:09:09.569790  428718 out.go:179] * Starting "newest-cni-531046" primary control-plane node in "newest-cni-531046" cluster
	I1123 09:09:09.570956  428718 cache.go:134] Beginning downloading kic base image for docker with crio
	I1123 09:09:09.573142  428718 out.go:179] * Pulling base image v0.0.48-1763789673-21948 ...
	I1123 09:09:09.574347  428718 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1123 09:09:09.574385  428718 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21969-103686/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1123 09:09:09.574403  428718 cache.go:65] Caching tarball of preloaded images
	I1123 09:09:09.574469  428718 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1123 09:09:09.574518  428718 preload.go:238] Found /home/jenkins/minikube-integration/21969-103686/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1123 09:09:09.574535  428718 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1123 09:09:09.574672  428718 profile.go:143] Saving config to /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/newest-cni-531046/config.json ...
	I1123 09:09:09.596348  428718 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon, skipping pull
	I1123 09:09:09.596375  428718 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in daemon, skipping load
	I1123 09:09:09.596395  428718 cache.go:243] Successfully downloaded all kic artifacts
	I1123 09:09:09.596441  428718 start.go:360] acquireMachinesLock for newest-cni-531046: {Name:mk2e7449a31b4c230f352b5cfe12c4dd1ce5e4f0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1123 09:09:09.596513  428718 start.go:364] duration metric: took 46.31µs to acquireMachinesLock for "newest-cni-531046"
	I1123 09:09:09.596535  428718 start.go:96] Skipping create...Using existing machine configuration
	I1123 09:09:09.596546  428718 fix.go:54] fixHost starting: 
	I1123 09:09:09.596775  428718 cli_runner.go:164] Run: docker container inspect newest-cni-531046 --format={{.State.Status}}
	I1123 09:09:09.615003  428718 fix.go:112] recreateIfNeeded on newest-cni-531046: state=Stopped err=<nil>
	W1123 09:09:09.615044  428718 fix.go:138] unexpected machine state, will restart: <nil>
	I1123 09:09:10.962211  416838 pod_ready.go:94] pod "coredns-66bc5c9577-64rdm" is "Ready"
	I1123 09:09:10.962238  416838 pod_ready.go:86] duration metric: took 41.505811079s for pod "coredns-66bc5c9577-64rdm" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:09:10.964724  416838 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-602386" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:09:10.968263  416838 pod_ready.go:94] pod "etcd-default-k8s-diff-port-602386" is "Ready"
	I1123 09:09:10.968282  416838 pod_ready.go:86] duration metric: took 3.536222ms for pod "etcd-default-k8s-diff-port-602386" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:09:10.969953  416838 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-602386" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:09:10.973341  416838 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-602386" is "Ready"
	I1123 09:09:10.973358  416838 pod_ready.go:86] duration metric: took 3.359803ms for pod "kube-apiserver-default-k8s-diff-port-602386" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:09:10.975266  416838 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-602386" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:09:11.160920  416838 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-602386" is "Ready"
	I1123 09:09:11.160945  416838 pod_ready.go:86] duration metric: took 185.660534ms for pod "kube-controller-manager-default-k8s-diff-port-602386" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:09:11.361102  416838 pod_ready.go:83] waiting for pod "kube-proxy-wnrqx" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:09:11.760631  416838 pod_ready.go:94] pod "kube-proxy-wnrqx" is "Ready"
	I1123 09:09:11.760661  416838 pod_ready.go:86] duration metric: took 399.534821ms for pod "kube-proxy-wnrqx" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:09:11.961014  416838 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-602386" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:09:12.360788  416838 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-602386" is "Ready"
	I1123 09:09:12.360818  416838 pod_ready.go:86] duration metric: took 399.779479ms for pod "kube-scheduler-default-k8s-diff-port-602386" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:09:12.360830  416838 pod_ready.go:40] duration metric: took 42.908765939s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1123 09:09:12.404049  416838 start.go:625] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1123 09:09:12.405650  416838 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-602386" cluster and "default" namespace by default
	I1123 09:09:09.616814  428718 out.go:252] * Restarting existing docker container for "newest-cni-531046" ...
	I1123 09:09:09.616880  428718 cli_runner.go:164] Run: docker start newest-cni-531046
	I1123 09:09:09.907672  428718 cli_runner.go:164] Run: docker container inspect newest-cni-531046 --format={{.State.Status}}
	I1123 09:09:09.927111  428718 kic.go:430] container "newest-cni-531046" state is running.
	I1123 09:09:09.927497  428718 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-531046
	I1123 09:09:09.947618  428718 profile.go:143] Saving config to /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/newest-cni-531046/config.json ...
	I1123 09:09:09.947894  428718 machine.go:94] provisionDockerMachine start ...
	I1123 09:09:09.948010  428718 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-531046
	I1123 09:09:09.972117  428718 main.go:143] libmachine: Using SSH client type: native
	I1123 09:09:09.972394  428718 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I1123 09:09:09.972403  428718 main.go:143] libmachine: About to run SSH command:
	hostname
	I1123 09:09:09.973126  428718 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:56888->127.0.0.1:33133: read: connection reset by peer
	I1123 09:09:13.118820  428718 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-531046
	
	I1123 09:09:13.118862  428718 ubuntu.go:182] provisioning hostname "newest-cni-531046"
	I1123 09:09:13.118924  428718 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-531046
	I1123 09:09:13.137403  428718 main.go:143] libmachine: Using SSH client type: native
	I1123 09:09:13.137732  428718 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I1123 09:09:13.137754  428718 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-531046 && echo "newest-cni-531046" | sudo tee /etc/hostname
	I1123 09:09:13.292448  428718 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-531046
	
	I1123 09:09:13.292567  428718 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-531046
	I1123 09:09:13.312639  428718 main.go:143] libmachine: Using SSH client type: native
	I1123 09:09:13.312883  428718 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I1123 09:09:13.312902  428718 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-531046' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-531046/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-531046' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1123 09:09:13.456742  428718 main.go:143] libmachine: SSH cmd err, output: <nil>: 
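The script executed over SSH above is the standard Debian-style 127.0.1.1 convention: the machine's hostname is mapped to a loopback alias so it resolves without DNS. A hedged one-liner (not part of the run) to verify the result on the node container from the host, using the container name from this log:

    docker exec newest-cni-531046 grep -n '127.0.1.1' /etc/hosts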
	I1123 09:09:13.456786  428718 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21969-103686/.minikube CaCertPath:/home/jenkins/minikube-integration/21969-103686/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21969-103686/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21969-103686/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21969-103686/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21969-103686/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21969-103686/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21969-103686/.minikube}
	I1123 09:09:13.456823  428718 ubuntu.go:190] setting up certificates
	I1123 09:09:13.456836  428718 provision.go:84] configureAuth start
	I1123 09:09:13.456907  428718 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-531046
	I1123 09:09:13.476479  428718 provision.go:143] copyHostCerts
	I1123 09:09:13.476551  428718 exec_runner.go:144] found /home/jenkins/minikube-integration/21969-103686/.minikube/ca.pem, removing ...
	I1123 09:09:13.476578  428718 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21969-103686/.minikube/ca.pem
	I1123 09:09:13.476667  428718 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21969-103686/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21969-103686/.minikube/ca.pem (1078 bytes)
	I1123 09:09:13.476821  428718 exec_runner.go:144] found /home/jenkins/minikube-integration/21969-103686/.minikube/cert.pem, removing ...
	I1123 09:09:13.476836  428718 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21969-103686/.minikube/cert.pem
	I1123 09:09:13.476878  428718 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21969-103686/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21969-103686/.minikube/cert.pem (1123 bytes)
	I1123 09:09:13.476962  428718 exec_runner.go:144] found /home/jenkins/minikube-integration/21969-103686/.minikube/key.pem, removing ...
	I1123 09:09:13.476997  428718 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21969-103686/.minikube/key.pem
	I1123 09:09:13.477040  428718 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21969-103686/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21969-103686/.minikube/key.pem (1675 bytes)
	I1123 09:09:13.477127  428718 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21969-103686/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21969-103686/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21969-103686/.minikube/certs/ca-key.pem org=jenkins.newest-cni-531046 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-531046]
	I1123 09:09:13.551036  428718 provision.go:177] copyRemoteCerts
	I1123 09:09:13.551092  428718 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1123 09:09:13.551131  428718 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-531046
	I1123 09:09:13.570388  428718 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21969-103686/.minikube/machines/newest-cni-531046/id_rsa Username:docker}
	I1123 09:09:13.674461  428718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-103686/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1123 09:09:13.692480  428718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-103686/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1123 09:09:13.711416  428718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-103686/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1123 09:09:13.728169  428718 provision.go:87] duration metric: took 271.314005ms to configureAuth
	I1123 09:09:13.728202  428718 ubuntu.go:206] setting minikube options for container-runtime
	I1123 09:09:13.728420  428718 config.go:182] Loaded profile config "newest-cni-531046": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 09:09:13.728554  428718 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-531046
	I1123 09:09:13.747174  428718 main.go:143] libmachine: Using SSH client type: native
	I1123 09:09:13.747495  428718 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I1123 09:09:13.747521  428718 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1123 09:09:14.068767  428718 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1123 09:09:14.068799  428718 machine.go:97] duration metric: took 4.120887468s to provisionDockerMachine
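The sysconfig drop-in written a few lines above feeds CRIO_MINIKUBE_OPTIONS into the crio unit before the restart. A hedged check from the host, not part of the run, that the file landed and crio came back:

    docker exec newest-cni-531046 cat /etc/sysconfig/crio.minikube
    docker exec newest-cni-531046 systemctl is-active crio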
	I1123 09:09:14.068814  428718 start.go:293] postStartSetup for "newest-cni-531046" (driver="docker")
	I1123 09:09:14.068829  428718 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1123 09:09:14.068900  428718 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1123 09:09:14.068945  428718 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-531046
	I1123 09:09:14.088061  428718 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21969-103686/.minikube/machines/newest-cni-531046/id_rsa Username:docker}
	I1123 09:09:14.190042  428718 ssh_runner.go:195] Run: cat /etc/os-release
	I1123 09:09:14.193920  428718 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1123 09:09:14.193952  428718 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1123 09:09:14.193975  428718 filesync.go:126] Scanning /home/jenkins/minikube-integration/21969-103686/.minikube/addons for local assets ...
	I1123 09:09:14.194042  428718 filesync.go:126] Scanning /home/jenkins/minikube-integration/21969-103686/.minikube/files for local assets ...
	I1123 09:09:14.194148  428718 filesync.go:149] local asset: /home/jenkins/minikube-integration/21969-103686/.minikube/files/etc/ssl/certs/1072342.pem -> 1072342.pem in /etc/ssl/certs
	I1123 09:09:14.194286  428718 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1123 09:09:14.202503  428718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-103686/.minikube/files/etc/ssl/certs/1072342.pem --> /etc/ssl/certs/1072342.pem (1708 bytes)
	I1123 09:09:14.221567  428718 start.go:296] duration metric: took 152.735823ms for postStartSetup
	I1123 09:09:14.221638  428718 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1123 09:09:14.221678  428718 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-531046
	I1123 09:09:14.241073  428718 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21969-103686/.minikube/machines/newest-cni-531046/id_rsa Username:docker}
	I1123 09:09:14.341192  428718 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1123 09:09:14.345736  428718 fix.go:56] duration metric: took 4.749184186s for fixHost
	I1123 09:09:14.345761  428718 start.go:83] releasing machines lock for "newest-cni-531046", held for 4.749236041s
	I1123 09:09:14.345829  428718 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-531046
	I1123 09:09:14.367424  428718 ssh_runner.go:195] Run: cat /version.json
	I1123 09:09:14.367491  428718 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1123 09:09:14.367498  428718 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-531046
	I1123 09:09:14.367566  428718 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-531046
	I1123 09:09:14.387208  428718 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21969-103686/.minikube/machines/newest-cni-531046/id_rsa Username:docker}
	I1123 09:09:14.388547  428718 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21969-103686/.minikube/machines/newest-cni-531046/id_rsa Username:docker}
	I1123 09:09:14.489744  428718 ssh_runner.go:195] Run: systemctl --version
	I1123 09:09:14.553172  428718 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1123 09:09:14.597710  428718 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1123 09:09:14.603833  428718 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1123 09:09:14.603919  428718 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1123 09:09:14.613685  428718 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1123 09:09:14.613716  428718 start.go:496] detecting cgroup driver to use...
	I1123 09:09:14.613753  428718 detect.go:190] detected "systemd" cgroup driver on host os
	I1123 09:09:14.613814  428718 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1123 09:09:14.633265  428718 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1123 09:09:14.647148  428718 docker.go:218] disabling cri-docker service (if available) ...
	I1123 09:09:14.647207  428718 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1123 09:09:14.663589  428718 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1123 09:09:14.677157  428718 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1123 09:09:14.766215  428718 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1123 09:09:14.858401  428718 docker.go:234] disabling docker service ...
	I1123 09:09:14.858470  428718 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1123 09:09:14.873312  428718 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1123 09:09:14.888170  428718 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1123 09:09:14.983215  428718 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1123 09:09:15.073382  428718 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1123 09:09:15.086608  428718 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1123 09:09:15.101866  428718 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1123 09:09:15.101935  428718 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 09:09:15.111226  428718 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1123 09:09:15.111288  428718 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 09:09:15.120834  428718 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 09:09:15.130549  428718 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 09:09:15.140695  428718 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1123 09:09:15.148854  428718 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 09:09:15.157864  428718 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 09:09:15.166336  428718 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 09:09:15.176067  428718 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1123 09:09:15.183505  428718 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1123 09:09:15.191000  428718 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 09:09:15.295741  428718 ssh_runner.go:195] Run: sudo systemctl restart crio
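The sed sequence above rewrites /etc/crio/crio.conf.d/02-crio.conf in place (pause image, systemd cgroup manager, conmon cgroup, and the unprivileged-port sysctl) before the daemon-reload and restart. A hedged way to confirm the file's final state on the node:

    docker exec newest-cni-531046 grep -E \
      'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
      /etc/crio/crio.conf.d/02-crio.conf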
	I1123 09:09:15.433605  428718 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1123 09:09:15.433681  428718 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1123 09:09:15.439424  428718 start.go:564] Will wait 60s for crictl version
	I1123 09:09:15.439490  428718 ssh_runner.go:195] Run: which crictl
	I1123 09:09:15.444124  428718 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1123 09:09:15.469766  428718 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
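The version probe above relies on the runtime endpoint written to /etc/crictl.yaml earlier in this log. A hedged explicit form that works without that file:

    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version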
	I1123 09:09:15.469843  428718 ssh_runner.go:195] Run: crio --version
	I1123 09:09:15.500595  428718 ssh_runner.go:195] Run: crio --version
	I1123 09:09:15.539580  428718 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1123 09:09:15.540673  428718 cli_runner.go:164] Run: docker network inspect newest-cni-531046 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
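The Go template above flattens the network's IPAM config into ad-hoc JSON. A hedged equivalent using docker's native JSON output plus jq (jq is an assumption, not part of the harness):

    docker network inspect newest-cni-531046 | jq \
      '.[0] | {Name, Driver,
               Subnet:  .IPAM.Config[0].Subnet,
               Gateway: .IPAM.Config[0].Gateway}'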
	I1123 09:09:15.559666  428718 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1123 09:09:15.564697  428718 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
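The copy-then-cp pipeline above rewrites /etc/hosts so host.minikube.internal resolves to the network gateway (192.168.76.1 here); a plain sed -i would fail because /etc/hosts is bind-mounted into the container and cannot be replaced by rename. A hedged check of the result:

    docker exec newest-cni-531046 grep 'host.minikube.internal' /etc/hosts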
	I1123 09:09:15.581138  428718 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	
	
	==> CRI-O <==
	Nov 23 09:08:33 embed-certs-529341 crio[567]: time="2025-11-23T09:08:33.784727927Z" level=info msg="Created container 2d86c9903acaaee33c5f8d79a63a3e1ae843cbdfaddabca4d3e3d23a8f161671: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-62dbw/dashboard-metrics-scraper" id=5ad5fa77-3823-47ef-a051-48672ca44d85 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 09:08:33 embed-certs-529341 crio[567]: time="2025-11-23T09:08:33.785635203Z" level=info msg="Starting container: 2d86c9903acaaee33c5f8d79a63a3e1ae843cbdfaddabca4d3e3d23a8f161671" id=fc97ee7b-f7f9-4421-a083-a58692c6a695 name=/runtime.v1.RuntimeService/StartContainer
	Nov 23 09:08:33 embed-certs-529341 crio[567]: time="2025-11-23T09:08:33.788466998Z" level=info msg="Started container" PID=1736 containerID=2d86c9903acaaee33c5f8d79a63a3e1ae843cbdfaddabca4d3e3d23a8f161671 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-62dbw/dashboard-metrics-scraper id=fc97ee7b-f7f9-4421-a083-a58692c6a695 name=/runtime.v1.RuntimeService/StartContainer sandboxID=b5dfedeb7bb51c67bfbedee7ff9c29b104685cdeef555dc237e079af82565649
	Nov 23 09:08:34 embed-certs-529341 crio[567]: time="2025-11-23T09:08:34.224747192Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=3e954054-2019-4a57-8630-aab025766c6d name=/runtime.v1.ImageService/ImageStatus
	Nov 23 09:08:34 embed-certs-529341 crio[567]: time="2025-11-23T09:08:34.228813636Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=3d704328-f0c1-4ed3-a4d3-5cf307f73726 name=/runtime.v1.ImageService/ImageStatus
	Nov 23 09:08:34 embed-certs-529341 crio[567]: time="2025-11-23T09:08:34.231925988Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-62dbw/dashboard-metrics-scraper" id=42b765f6-2664-45a0-aed3-32da115bdb76 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 09:08:34 embed-certs-529341 crio[567]: time="2025-11-23T09:08:34.232179478Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 09:08:34 embed-certs-529341 crio[567]: time="2025-11-23T09:08:34.242558262Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 09:08:34 embed-certs-529341 crio[567]: time="2025-11-23T09:08:34.243494253Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 09:08:34 embed-certs-529341 crio[567]: time="2025-11-23T09:08:34.298879454Z" level=info msg="Created container 9c33a6395eda255ea3f642c2ca8136f16c8016d9b93e27c71b0c0a735afccab2: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-62dbw/dashboard-metrics-scraper" id=42b765f6-2664-45a0-aed3-32da115bdb76 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 09:08:34 embed-certs-529341 crio[567]: time="2025-11-23T09:08:34.300216979Z" level=info msg="Starting container: 9c33a6395eda255ea3f642c2ca8136f16c8016d9b93e27c71b0c0a735afccab2" id=08a1d9ab-2fba-4ac2-ac9a-3dd5bcb53cc6 name=/runtime.v1.RuntimeService/StartContainer
	Nov 23 09:08:34 embed-certs-529341 crio[567]: time="2025-11-23T09:08:34.302870568Z" level=info msg="Started container" PID=1745 containerID=9c33a6395eda255ea3f642c2ca8136f16c8016d9b93e27c71b0c0a735afccab2 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-62dbw/dashboard-metrics-scraper id=08a1d9ab-2fba-4ac2-ac9a-3dd5bcb53cc6 name=/runtime.v1.RuntimeService/StartContainer sandboxID=b5dfedeb7bb51c67bfbedee7ff9c29b104685cdeef555dc237e079af82565649
	Nov 23 09:08:35 embed-certs-529341 crio[567]: time="2025-11-23T09:08:35.229266882Z" level=info msg="Removing container: 2d86c9903acaaee33c5f8d79a63a3e1ae843cbdfaddabca4d3e3d23a8f161671" id=ddd3244d-7bee-4b0c-8307-f43aef4b1c50 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 23 09:08:35 embed-certs-529341 crio[567]: time="2025-11-23T09:08:35.238724789Z" level=info msg="Removed container 2d86c9903acaaee33c5f8d79a63a3e1ae843cbdfaddabca4d3e3d23a8f161671: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-62dbw/dashboard-metrics-scraper" id=ddd3244d-7bee-4b0c-8307-f43aef4b1c50 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 23 09:08:55 embed-certs-529341 crio[567]: time="2025-11-23T09:08:55.140168559Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=8e12f669-4a2f-4626-a8b8-f73f5c6f2c5a name=/runtime.v1.ImageService/ImageStatus
	Nov 23 09:08:55 embed-certs-529341 crio[567]: time="2025-11-23T09:08:55.141613521Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=6d6eaebc-c0e2-433a-b7aa-d550c5dbf703 name=/runtime.v1.ImageService/ImageStatus
	Nov 23 09:08:55 embed-certs-529341 crio[567]: time="2025-11-23T09:08:55.142791491Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-62dbw/dashboard-metrics-scraper" id=8e4689c6-c00c-42fe-a0c5-85fc6a40079f name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 09:08:55 embed-certs-529341 crio[567]: time="2025-11-23T09:08:55.142939044Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 09:08:55 embed-certs-529341 crio[567]: time="2025-11-23T09:08:55.150167994Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 09:08:55 embed-certs-529341 crio[567]: time="2025-11-23T09:08:55.15085134Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 09:08:55 embed-certs-529341 crio[567]: time="2025-11-23T09:08:55.183426906Z" level=info msg="Created container 68b41eb209db97fed6d1c6b8bca8594140644f8fd87b6b346514d4238db1ac52: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-62dbw/dashboard-metrics-scraper" id=8e4689c6-c00c-42fe-a0c5-85fc6a40079f name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 09:08:55 embed-certs-529341 crio[567]: time="2025-11-23T09:08:55.184593092Z" level=info msg="Starting container: 68b41eb209db97fed6d1c6b8bca8594140644f8fd87b6b346514d4238db1ac52" id=9b35ebe3-f0b9-4f18-923d-3a547b53d699 name=/runtime.v1.RuntimeService/StartContainer
	Nov 23 09:08:55 embed-certs-529341 crio[567]: time="2025-11-23T09:08:55.187126709Z" level=info msg="Started container" PID=1759 containerID=68b41eb209db97fed6d1c6b8bca8594140644f8fd87b6b346514d4238db1ac52 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-62dbw/dashboard-metrics-scraper id=9b35ebe3-f0b9-4f18-923d-3a547b53d699 name=/runtime.v1.RuntimeService/StartContainer sandboxID=b5dfedeb7bb51c67bfbedee7ff9c29b104685cdeef555dc237e079af82565649
	Nov 23 09:08:55 embed-certs-529341 crio[567]: time="2025-11-23T09:08:55.284998208Z" level=info msg="Removing container: 9c33a6395eda255ea3f642c2ca8136f16c8016d9b93e27c71b0c0a735afccab2" id=630e0ffe-d0ca-4d4b-8fc8-73483f496fe4 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 23 09:08:55 embed-certs-529341 crio[567]: time="2025-11-23T09:08:55.298260814Z" level=info msg="Removed container 9c33a6395eda255ea3f642c2ca8136f16c8016d9b93e27c71b0c0a735afccab2: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-62dbw/dashboard-metrics-scraper" id=630e0ffe-d0ca-4d4b-8fc8-73483f496fe4 name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	68b41eb209db9       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           21 seconds ago      Exited              dashboard-metrics-scraper   2                   b5dfedeb7bb51       dashboard-metrics-scraper-6ffb444bf9-62dbw   kubernetes-dashboard
	1fcc5add6e61a       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   45 seconds ago      Running             kubernetes-dashboard        0                   b04250bf97df7       kubernetes-dashboard-855c9754f9-rvlmt        kubernetes-dashboard
	55ae874f44dc2       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           53 seconds ago      Running             storage-provisioner         1                   e132719962db7       storage-provisioner                          kube-system
	a4083bb27db36       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           53 seconds ago      Running             busybox                     1                   b768d63a3e28f       busybox                                      default
	4c812cd4da92b       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           53 seconds ago      Running             coredns                     0                   3f87efe921d05       coredns-66bc5c9577-k4bmj                     kube-system
	f2040fd9793ad       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           53 seconds ago      Exited              storage-provisioner         0                   e132719962db7       storage-provisioner                          kube-system
	b39a3a89a3260       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           53 seconds ago      Running             kindnet-cni                 0                   a14e404f6a09d       kindnet-twlcq                                kube-system
	24605aef520e0       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                           53 seconds ago      Running             kube-proxy                  0                   226aeedea31d8       kube-proxy-xfwhk                             kube-system
	73227818d4fc9       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                           56 seconds ago      Running             etcd                        0                   8fd1ed6ae0c0b       etcd-embed-certs-529341                      kube-system
	e146e17fa358a       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                           56 seconds ago      Running             kube-scheduler              0                   2e5cf9c877130       kube-scheduler-embed-certs-529341            kube-system
	9203249d1159b       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                           56 seconds ago      Running             kube-controller-manager     0                   fa2f0ae5069ef       kube-controller-manager-embed-certs-529341   kube-system
	51c0b9d62ee3b       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                           56 seconds ago      Running             kube-apiserver              0                   34049b849ed77       kube-apiserver-embed-certs-529341            kube-system
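The table above is CRI-level state rather than kubectl output, so it can be reproduced on the node with crictl; a hedged invocation (the exact flags and columns may differ by crictl version):

    docker exec embed-certs-529341 crictl ps -a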
	
	
	==> coredns [4c812cd4da92b4f8e5c65f43f8329812f3a2909eda6cf6aecc65a2735df04ae1] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 66f0a748f44f6317a6b122af3f457c9dd0ecaed8718ffbf95a69434523efd9ec4992e71f54c7edd5753646fe9af89ac2138b9c3ce14d4a0ba9d2372a55f120bb
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:59785 - 26556 "HINFO IN 30320258078332738.1692373228013941160. udp 55 false 512" NXDOMAIN qr,rd,ra 130 0.071437658s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
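The repeated i/o timeouts to 10.96.0.1:443 above mean CoreDNS could not reach the apiserver Service VIP while the node was restarting; once kube-proxy and the CNI re-synced, the list/watch calls stopped failing. Hedged triage commands, with the kubectl context name assumed to match the profile:

    kubectl --context embed-certs-529341 get svc kubernetes
    kubectl --context embed-certs-529341 get endpointslices \
      -l kubernetes.io/service-name=kubernetes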
	
	
	==> describe nodes <==
	Name:               embed-certs-529341
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-529341
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=50c3a8a3c03e8a84b6c978a884d21c3de8c6d4f1
	                    minikube.k8s.io/name=embed-certs-529341
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_23T09_07_22_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 23 Nov 2025 09:07:19 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-529341
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 23 Nov 2025 09:09:03 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 23 Nov 2025 09:08:52 +0000   Sun, 23 Nov 2025 09:07:17 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 23 Nov 2025 09:08:52 +0000   Sun, 23 Nov 2025 09:07:17 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 23 Nov 2025 09:08:52 +0000   Sun, 23 Nov 2025 09:07:17 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 23 Nov 2025 09:08:52 +0000   Sun, 23 Nov 2025 09:07:38 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    embed-certs-529341
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 9629f1d5bc1ed524a56ce23c69214c09
	  System UUID:                98603fac-552b-4d14-ae49-954d6ab02bae
	  Boot ID:                    49386d37-de97-442f-8364-9b77578bcf47
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         95s
	  kube-system                 coredns-66bc5c9577-k4bmj                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     109s
	  kube-system                 etcd-embed-certs-529341                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         115s
	  kube-system                 kindnet-twlcq                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      109s
	  kube-system                 kube-apiserver-embed-certs-529341             250m (3%)     0 (0%)      0 (0%)           0 (0%)         115s
	  kube-system                 kube-controller-manager-embed-certs-529341    200m (2%)     0 (0%)      0 (0%)           0 (0%)         115s
	  kube-system                 kube-proxy-xfwhk                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         109s
	  kube-system                 kube-scheduler-embed-certs-529341             100m (1%)     0 (0%)      0 (0%)           0 (0%)         115s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         109s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-62dbw    0 (0%)        0 (0%)      0 (0%)           0 (0%)         51s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-rvlmt         0 (0%)        0 (0%)      0 (0%)           0 (0%)         51s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 108s               kube-proxy       
	  Normal  Starting                 53s                kube-proxy       
	  Normal  NodeHasSufficientMemory  115s               kubelet          Node embed-certs-529341 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    115s               kubelet          Node embed-certs-529341 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     115s               kubelet          Node embed-certs-529341 status is now: NodeHasSufficientPID
	  Normal  Starting                 115s               kubelet          Starting kubelet.
	  Normal  RegisteredNode           110s               node-controller  Node embed-certs-529341 event: Registered Node embed-certs-529341 in Controller
	  Normal  NodeReady                98s                kubelet          Node embed-certs-529341 status is now: NodeReady
	  Normal  Starting                 57s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  57s (x8 over 57s)  kubelet          Node embed-certs-529341 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    57s (x8 over 57s)  kubelet          Node embed-certs-529341 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     57s (x8 over 57s)  kubelet          Node embed-certs-529341 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           51s                node-controller  Node embed-certs-529341 event: Registered Node embed-certs-529341 in Controller
	
	
	==> dmesg <==
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff e2 f6 b3 df 65 66 08 06
	[ +15.220231] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff ce d6 cd 1c d5 af 08 06
	[  +0.016823] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff 42 21 9e 70 54 d2 08 06
	[  +0.853950] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 8a f3 da 67 50 34 08 06
	[  +0.000402] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff e2 f6 b3 df 65 66 08 06
	[Nov23 09:06] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 3a fe f0 bb b2 e5 08 06
	[  +0.000433] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000017] ll header: 00000000: ff ff ff ff ff ff 42 21 9e 70 54 d2 08 06
	[ +22.099976] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff fa bc 42 68 ca 38 08 06
	[  +0.042361] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 02 6f 93 2c ed 12 08 06
	[ +12.988668] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff c6 40 c7 0d 08 88 08 06
	[  +0.000458] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff c6 f2 c5 3b d5 0a 08 06
	[  +8.074904] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff ba d8 15 23 cb ea 08 06
	[  +0.000480] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff fa bc 42 68 ca 38 08 06
	
	
	==> etcd [73227818d4fc9086a936e1b1251ac49dc9f565e9664d34c892e0e5e5c62a8920] <==
	{"level":"warn","ts":"2025-11-23T09:08:21.456508Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41452","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:08:21.462428Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41478","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:08:21.468711Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41484","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:08:21.479059Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41490","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:08:21.484921Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41516","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:08:21.491052Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41522","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:08:21.497319Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41548","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:08:21.503238Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41558","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:08:21.509174Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41578","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:08:21.515217Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41596","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:08:21.521197Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41608","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:08:21.527110Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41618","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:08:21.533146Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41636","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:08:21.539857Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41644","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:08:21.552159Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41674","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:08:21.555443Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41682","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:08:21.561413Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41694","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:08:21.567333Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41708","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:08:21.619697Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41726","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-23T09:08:36.353210Z","caller":"traceutil/trace.go:172","msg":"trace[541934201] transaction","detail":"{read_only:false; response_revision:598; number_of_response:1; }","duration":"117.50899ms","start":"2025-11-23T09:08:36.235678Z","end":"2025-11-23T09:08:36.353187Z","steps":["trace[541934201] 'process raft request'  (duration: 117.365367ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-23T09:08:36.362631Z","caller":"traceutil/trace.go:172","msg":"trace[1484500734] transaction","detail":"{read_only:false; response_revision:599; number_of_response:1; }","duration":"123.403892ms","start":"2025-11-23T09:08:36.239200Z","end":"2025-11-23T09:08:36.362604Z","steps":["trace[1484500734] 'process raft request'  (duration: 123.181938ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-23T09:08:42.755473Z","caller":"traceutil/trace.go:172","msg":"trace[1897104766] transaction","detail":"{read_only:false; response_revision:608; number_of_response:1; }","duration":"110.243935ms","start":"2025-11-23T09:08:42.645212Z","end":"2025-11-23T09:08:42.755456Z","steps":["trace[1897104766] 'process raft request'  (duration: 83.730417ms)","trace[1897104766] 'compare'  (duration: 26.403441ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-23T09:08:43.453312Z","caller":"traceutil/trace.go:172","msg":"trace[1836705768] transaction","detail":"{read_only:false; response_revision:610; number_of_response:1; }","duration":"116.83929ms","start":"2025-11-23T09:08:43.336442Z","end":"2025-11-23T09:08:43.453281Z","steps":["trace[1836705768] 'process raft request'  (duration: 116.663303ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-23T09:08:43.720579Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"118.14294ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/coredns-66bc5c9577-k4bmj\" limit:1 ","response":"range_response_count:1 size:5936"}
	{"level":"info","ts":"2025-11-23T09:08:43.720714Z","caller":"traceutil/trace.go:172","msg":"trace[1544840451] range","detail":"{range_begin:/registry/pods/kube-system/coredns-66bc5c9577-k4bmj; range_end:; response_count:1; response_revision:610; }","duration":"118.323526ms","start":"2025-11-23T09:08:43.602370Z","end":"2025-11-23T09:08:43.720693Z","steps":["trace[1544840451] 'range keys from in-memory index tree'  (duration: 117.972312ms)"],"step_count":1}
	
	
	==> kernel <==
	 09:09:16 up  1:51,  0 user,  load average: 4.94, 4.50, 2.92
	Linux embed-certs-529341 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [b39a3a89a3260a4d13829f17d219e1eff98e35fb95ac427b3eeff00f500fc9cf] <==
	I1123 09:08:22.686549       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1123 09:08:22.686805       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I1123 09:08:22.687015       1 main.go:148] setting mtu 1500 for CNI 
	I1123 09:08:22.687034       1 main.go:178] kindnetd IP family: "ipv4"
	I1123 09:08:22.687061       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-23T09:08:22Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1123 09:08:22.889534       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1123 09:08:22.890152       1 controller.go:381] "Waiting for informer caches to sync"
	I1123 09:08:22.890227       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	E1123 09:08:22.890501       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: connect: connection refused" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1123 09:08:22.983126       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1123 09:08:24.090906       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1123 09:08:24.090950       1 metrics.go:72] Registering metrics
	I1123 09:08:24.091087       1 controller.go:711] "Syncing nftables rules"
	I1123 09:08:32.890425       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1123 09:08:32.890498       1 main.go:301] handling current node
	I1123 09:08:42.892178       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1123 09:08:42.892218       1 main.go:301] handling current node
	I1123 09:08:52.889582       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1123 09:08:52.889618       1 main.go:301] handling current node
	I1123 09:09:02.892800       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1123 09:09:02.892841       1 main.go:301] handling current node
	I1123 09:09:12.896070       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1123 09:09:12.896100       1 main.go:301] handling current node
	
	
	==> kube-apiserver [51c0b9d62ee3b397d97f51cf65c1c8166419f7ce47ad5cd1f86257c9ff8d2429] <==
	I1123 09:08:22.086184       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1123 09:08:22.086686       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1123 09:08:22.087090       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1123 09:08:22.087158       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1123 09:08:22.087160       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1123 09:08:22.087235       1 aggregator.go:171] initial CRD sync complete...
	I1123 09:08:22.087247       1 autoregister_controller.go:144] Starting autoregister controller
	I1123 09:08:22.087253       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1123 09:08:22.087260       1 cache.go:39] Caches are synced for autoregister controller
	I1123 09:08:22.087442       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1123 09:08:22.087848       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1123 09:08:22.087945       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1123 09:08:22.092473       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1123 09:08:22.118227       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1123 09:08:22.213155       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1123 09:08:22.323027       1 controller.go:667] quota admission added evaluator for: namespaces
	I1123 09:08:22.349494       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1123 09:08:22.368505       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1123 09:08:22.375169       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1123 09:08:22.412433       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.97.97.230"}
	I1123 09:08:22.427833       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.105.252.82"}
	I1123 09:08:22.992039       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1123 09:08:25.465417       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1123 09:08:25.616150       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1123 09:08:26.016168       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [9203249d1159b35eb2d2457002eb5a7611462190dc85089a0e28c7fd11b1257a] <==
	I1123 09:08:25.375429       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1123 09:08:25.381760       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1123 09:08:25.381776       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1123 09:08:25.381782       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1123 09:08:25.386389       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1123 09:08:25.388581       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1123 09:08:25.390868       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1123 09:08:25.406125       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1123 09:08:25.411695       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1123 09:08:25.411729       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1123 09:08:25.411700       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1123 09:08:25.411851       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1123 09:08:25.411902       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1123 09:08:25.411919       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1123 09:08:25.411948       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1123 09:08:25.412221       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1123 09:08:25.412442       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1123 09:08:25.412917       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1123 09:08:25.412946       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1123 09:08:25.413113       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1123 09:08:25.414198       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1123 09:08:25.420382       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1123 09:08:25.431549       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1123 09:08:25.433829       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1123 09:08:25.434898       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	
	
	==> kube-proxy [24605aef520e058c1174bc9967a0b76ad5e754e93c2fe3760330c218fd7991da] <==
	I1123 09:08:22.554697       1 server_linux.go:53] "Using iptables proxy"
	I1123 09:08:22.612600       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1123 09:08:22.712741       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1123 09:08:22.712775       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.103.2"]
	E1123 09:08:22.712864       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1123 09:08:22.731247       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1123 09:08:22.731297       1 server_linux.go:132] "Using iptables Proxier"
	I1123 09:08:22.736358       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1123 09:08:22.737176       1 server.go:527] "Version info" version="v1.34.1"
	I1123 09:08:22.737226       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1123 09:08:22.739099       1 config.go:403] "Starting serviceCIDR config controller"
	I1123 09:08:22.739122       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1123 09:08:22.739160       1 config.go:106] "Starting endpoint slice config controller"
	I1123 09:08:22.739166       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1123 09:08:22.739166       1 config.go:309] "Starting node config controller"
	I1123 09:08:22.739179       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1123 09:08:22.739153       1 config.go:200] "Starting service config controller"
	I1123 09:08:22.739197       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1123 09:08:22.839786       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1123 09:08:22.839838       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1123 09:08:22.839849       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1123 09:08:22.839827       1 shared_informer.go:356] "Caches are synced" controller="node config"
	
	
	==> kube-scheduler [e146e17fa358a72d868c4916214f772a64934dfcef476610c2ec35b50a15e5a8] <==
	I1123 09:08:20.758415       1 serving.go:386] Generated self-signed cert in-memory
	W1123 09:08:22.003168       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1123 09:08:22.003295       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1123 09:08:22.003316       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1123 09:08:22.003327       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1123 09:08:22.046845       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1123 09:08:22.046883       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1123 09:08:22.049908       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1123 09:08:22.049959       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1123 09:08:22.050350       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1123 09:08:22.050694       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1123 09:08:22.150549       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 23 09:08:23 embed-certs-529341 kubelet[733]: I1123 09:08:23.166121     733 scope.go:117] "RemoveContainer" containerID="f2040fd9793ad37371368b566e46ecdaef1bdd733df18fb23cc2af7669381df4"
	Nov 23 09:08:26 embed-certs-529341 kubelet[733]: I1123 09:08:26.034505     733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/ba819018-1e9f-492a-8282-cbb1801bf72e-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-rvlmt\" (UID: \"ba819018-1e9f-492a-8282-cbb1801bf72e\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-rvlmt"
	Nov 23 09:08:26 embed-certs-529341 kubelet[733]: I1123 09:08:26.034549     733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vbsdv\" (UniqueName: \"kubernetes.io/projected/ba819018-1e9f-492a-8282-cbb1801bf72e-kube-api-access-vbsdv\") pod \"kubernetes-dashboard-855c9754f9-rvlmt\" (UID: \"ba819018-1e9f-492a-8282-cbb1801bf72e\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-rvlmt"
	Nov 23 09:08:26 embed-certs-529341 kubelet[733]: I1123 09:08:26.034572     733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/18538be9-2dd3-4ea1-890e-d78a3d24eff0-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-62dbw\" (UID: \"18538be9-2dd3-4ea1-890e-d78a3d24eff0\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-62dbw"
	Nov 23 09:08:26 embed-certs-529341 kubelet[733]: I1123 09:08:26.034589     733 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-thcgh\" (UniqueName: \"kubernetes.io/projected/18538be9-2dd3-4ea1-890e-d78a3d24eff0-kube-api-access-thcgh\") pod \"dashboard-metrics-scraper-6ffb444bf9-62dbw\" (UID: \"18538be9-2dd3-4ea1-890e-d78a3d24eff0\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-62dbw"
	Nov 23 09:08:28 embed-certs-529341 kubelet[733]: I1123 09:08:28.405827     733 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Nov 23 09:08:34 embed-certs-529341 kubelet[733]: I1123 09:08:34.223930     733 scope.go:117] "RemoveContainer" containerID="2d86c9903acaaee33c5f8d79a63a3e1ae843cbdfaddabca4d3e3d23a8f161671"
	Nov 23 09:08:34 embed-certs-529341 kubelet[733]: I1123 09:08:34.242456     733 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-rvlmt" podStartSLOduration=4.60619963 podStartE2EDuration="9.242206992s" podCreationTimestamp="2025-11-23 09:08:25 +0000 UTC" firstStartedPulling="2025-11-23 09:08:26.172474316 +0000 UTC m=+7.130273785" lastFinishedPulling="2025-11-23 09:08:30.808481664 +0000 UTC m=+11.766281147" observedRunningTime="2025-11-23 09:08:31.26376155 +0000 UTC m=+12.221561037" watchObservedRunningTime="2025-11-23 09:08:34.242206992 +0000 UTC m=+15.200006479"
	Nov 23 09:08:35 embed-certs-529341 kubelet[733]: I1123 09:08:35.227788     733 scope.go:117] "RemoveContainer" containerID="2d86c9903acaaee33c5f8d79a63a3e1ae843cbdfaddabca4d3e3d23a8f161671"
	Nov 23 09:08:35 embed-certs-529341 kubelet[733]: I1123 09:08:35.227944     733 scope.go:117] "RemoveContainer" containerID="9c33a6395eda255ea3f642c2ca8136f16c8016d9b93e27c71b0c0a735afccab2"
	Nov 23 09:08:35 embed-certs-529341 kubelet[733]: E1123 09:08:35.228175     733 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-62dbw_kubernetes-dashboard(18538be9-2dd3-4ea1-890e-d78a3d24eff0)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-62dbw" podUID="18538be9-2dd3-4ea1-890e-d78a3d24eff0"
	Nov 23 09:08:36 embed-certs-529341 kubelet[733]: I1123 09:08:36.232446     733 scope.go:117] "RemoveContainer" containerID="9c33a6395eda255ea3f642c2ca8136f16c8016d9b93e27c71b0c0a735afccab2"
	Nov 23 09:08:36 embed-certs-529341 kubelet[733]: E1123 09:08:36.232680     733 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-62dbw_kubernetes-dashboard(18538be9-2dd3-4ea1-890e-d78a3d24eff0)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-62dbw" podUID="18538be9-2dd3-4ea1-890e-d78a3d24eff0"
	Nov 23 09:08:44 embed-certs-529341 kubelet[733]: I1123 09:08:44.167022     733 scope.go:117] "RemoveContainer" containerID="9c33a6395eda255ea3f642c2ca8136f16c8016d9b93e27c71b0c0a735afccab2"
	Nov 23 09:08:44 embed-certs-529341 kubelet[733]: E1123 09:08:44.167208     733 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-62dbw_kubernetes-dashboard(18538be9-2dd3-4ea1-890e-d78a3d24eff0)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-62dbw" podUID="18538be9-2dd3-4ea1-890e-d78a3d24eff0"
	Nov 23 09:08:55 embed-certs-529341 kubelet[733]: I1123 09:08:55.139201     733 scope.go:117] "RemoveContainer" containerID="9c33a6395eda255ea3f642c2ca8136f16c8016d9b93e27c71b0c0a735afccab2"
	Nov 23 09:08:55 embed-certs-529341 kubelet[733]: I1123 09:08:55.283695     733 scope.go:117] "RemoveContainer" containerID="9c33a6395eda255ea3f642c2ca8136f16c8016d9b93e27c71b0c0a735afccab2"
	Nov 23 09:08:55 embed-certs-529341 kubelet[733]: I1123 09:08:55.283909     733 scope.go:117] "RemoveContainer" containerID="68b41eb209db97fed6d1c6b8bca8594140644f8fd87b6b346514d4238db1ac52"
	Nov 23 09:08:55 embed-certs-529341 kubelet[733]: E1123 09:08:55.284129     733 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-62dbw_kubernetes-dashboard(18538be9-2dd3-4ea1-890e-d78a3d24eff0)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-62dbw" podUID="18538be9-2dd3-4ea1-890e-d78a3d24eff0"
	Nov 23 09:09:04 embed-certs-529341 kubelet[733]: I1123 09:09:04.166844     733 scope.go:117] "RemoveContainer" containerID="68b41eb209db97fed6d1c6b8bca8594140644f8fd87b6b346514d4238db1ac52"
	Nov 23 09:09:04 embed-certs-529341 kubelet[733]: E1123 09:09:04.167077     733 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-62dbw_kubernetes-dashboard(18538be9-2dd3-4ea1-890e-d78a3d24eff0)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-62dbw" podUID="18538be9-2dd3-4ea1-890e-d78a3d24eff0"
	Nov 23 09:09:12 embed-certs-529341 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 23 09:09:12 embed-certs-529341 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 23 09:09:12 embed-certs-529341 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Nov 23 09:09:12 embed-certs-529341 systemd[1]: kubelet.service: Consumed 1.686s CPU time.
	
	
	==> kubernetes-dashboard [1fcc5add6e61a5d923fcf319bbde8c2bbb3114452f9be5a89a324af683e58bd4] <==
	2025/11/23 09:08:30 Using namespace: kubernetes-dashboard
	2025/11/23 09:08:30 Using in-cluster config to connect to apiserver
	2025/11/23 09:08:30 Using secret token for csrf signing
	2025/11/23 09:08:30 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/23 09:08:30 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/23 09:08:30 Successful initial request to the apiserver, version: v1.34.1
	2025/11/23 09:08:30 Generating JWE encryption key
	2025/11/23 09:08:30 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/23 09:08:30 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/23 09:08:31 Initializing JWE encryption key from synchronized object
	2025/11/23 09:08:31 Creating in-cluster Sidecar client
	2025/11/23 09:08:31 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/23 09:08:31 Serving insecurely on HTTP port: 9090
	2025/11/23 09:09:01 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/23 09:08:30 Starting overwatch
	
	
	==> storage-provisioner [55ae874f44dc2af85018340071210dea3f20e2e8d9f97c97c756c4502243dc3e] <==
	W1123 09:08:52.799513       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:08:54.803640       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:08:54.809428       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:08:56.813344       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:08:56.818029       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:08:58.821592       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:08:58.826680       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:09:00.829088       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:09:00.832812       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:09:02.836285       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:09:02.841026       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:09:04.844172       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:09:04.849036       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:09:06.854383       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:09:06.858625       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:09:08.861930       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:09:08.866111       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:09:10.870117       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:09:10.877345       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:09:12.880866       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:09:12.884696       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:09:14.888935       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:09:14.893446       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:09:16.897526       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:09:16.902870       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [f2040fd9793ad37371368b566e46ecdaef1bdd733df18fb23cc2af7669381df4] <==
	I1123 09:08:22.531190       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1123 09:08:22.535129       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-529341 -n embed-certs-529341
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-529341 -n embed-certs-529341: exit status 2 (413.365668ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-529341 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/embed-certs/serial/Pause (5.86s)
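The etcd log above also records several applies over the 100ms warning threshold ("apply request took too long"), which can accompany slow pause runs on a loaded CI host (note the load average of 4.94 in the kernel section). Etcd health can be checked by hand with etcdctl inside the etcd pod; a minimal sketch, assuming the standard kubeadm pod name etcd-embed-certs-529341 and minikube's certificate directory /var/lib/minikube/certs (both assumptions, not shown in this run):

	kubectl --context embed-certs-529341 -n kube-system exec etcd-embed-certs-529341 -- \
	  etcdctl --endpoints=https://127.0.0.1:2379 \
	          --cacert=/var/lib/minikube/certs/etcd/ca.crt \
	          --cert=/var/lib/minikube/certs/etcd/server.crt \
	          --key=/var/lib/minikube/certs/etcd/server.key \
	          endpoint status -w table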

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Pause (5.92s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-531046 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p newest-cni-531046 --alsologtostderr -v=1: exit status 80 (2.325515478s)

                                                
                                                
-- stdout --
	* Pausing node newest-cni-531046 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1123 09:09:20.645782  432810 out.go:360] Setting OutFile to fd 1 ...
	I1123 09:09:20.646065  432810 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 09:09:20.646077  432810 out.go:374] Setting ErrFile to fd 2...
	I1123 09:09:20.646084  432810 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 09:09:20.646311  432810 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21969-103686/.minikube/bin
	I1123 09:09:20.646582  432810 out.go:368] Setting JSON to false
	I1123 09:09:20.646611  432810 mustload.go:66] Loading cluster: newest-cni-531046
	I1123 09:09:20.647080  432810 config.go:182] Loaded profile config "newest-cni-531046": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 09:09:20.647529  432810 cli_runner.go:164] Run: docker container inspect newest-cni-531046 --format={{.State.Status}}
	I1123 09:09:20.667986  432810 host.go:66] Checking if "newest-cni-531046" exists ...
	I1123 09:09:20.668300  432810 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 09:09:20.729192  432810 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:52 OomKillDisable:false NGoroutines:66 SystemTime:2025-11-23 09:09:20.71777389 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1123 09:09:20.729821  432810 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1763503576-21924/minikube-v1.37.0-1763503576-21924-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1763503576-21924-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:newest-cni-531046 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1123 09:09:20.731895  432810 out.go:179] * Pausing node newest-cni-531046 ... 
	I1123 09:09:20.733279  432810 host.go:66] Checking if "newest-cni-531046" exists ...
	I1123 09:09:20.733585  432810 ssh_runner.go:195] Run: systemctl --version
	I1123 09:09:20.733646  432810 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-531046
	I1123 09:09:20.753941  432810 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21969-103686/.minikube/machines/newest-cni-531046/id_rsa Username:docker}
	I1123 09:09:20.854784  432810 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 09:09:20.867052  432810 pause.go:52] kubelet running: true
	I1123 09:09:20.867124  432810 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1123 09:09:20.995143  432810 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1123 09:09:20.995262  432810 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1123 09:09:21.064731  432810 cri.go:89] found id: "edc56583186f2258bd9fdb6eba895c5753f1cecec4a5044814b36d957042477b"
	I1123 09:09:21.064760  432810 cri.go:89] found id: "49f6ca7e606fee383c1970cc49393b673cbe10bab961ad5f1ec4a8fad85217f6"
	I1123 09:09:21.064764  432810 cri.go:89] found id: "b8d492ab9433edafd1001b1ad9293c111df36e0796915a8d3f0c6bc7c2cdf3df"
	I1123 09:09:21.064768  432810 cri.go:89] found id: "0349a0b9c0911ac10237b136d83d49de278765fa5222cc116b95ab287527cd9b"
	I1123 09:09:21.064771  432810 cri.go:89] found id: "6a43edcb0ace54dc346700c8af14f2c2903a53edccf3417648cd37fa8485786d"
	I1123 09:09:21.064776  432810 cri.go:89] found id: "4e62ba65019726752dfd1a28db17ceb7288f5f526cdecef122cccdc9395928a0"
	I1123 09:09:21.064779  432810 cri.go:89] found id: ""
	I1123 09:09:21.064844  432810 ssh_runner.go:195] Run: sudo runc list -f json
	I1123 09:09:21.076866  432810 retry.go:31] will retry after 299.426705ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T09:09:21Z" level=error msg="open /run/runc: no such file or directory"
	I1123 09:09:21.377155  432810 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 09:09:21.390205  432810 pause.go:52] kubelet running: false
	I1123 09:09:21.390265  432810 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1123 09:09:21.508375  432810 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1123 09:09:21.508442  432810 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1123 09:09:21.574428  432810 cri.go:89] found id: "edc56583186f2258bd9fdb6eba895c5753f1cecec4a5044814b36d957042477b"
	I1123 09:09:21.574448  432810 cri.go:89] found id: "49f6ca7e606fee383c1970cc49393b673cbe10bab961ad5f1ec4a8fad85217f6"
	I1123 09:09:21.574452  432810 cri.go:89] found id: "b8d492ab9433edafd1001b1ad9293c111df36e0796915a8d3f0c6bc7c2cdf3df"
	I1123 09:09:21.574455  432810 cri.go:89] found id: "0349a0b9c0911ac10237b136d83d49de278765fa5222cc116b95ab287527cd9b"
	I1123 09:09:21.574458  432810 cri.go:89] found id: "6a43edcb0ace54dc346700c8af14f2c2903a53edccf3417648cd37fa8485786d"
	I1123 09:09:21.574470  432810 cri.go:89] found id: "4e62ba65019726752dfd1a28db17ceb7288f5f526cdecef122cccdc9395928a0"
	I1123 09:09:21.574474  432810 cri.go:89] found id: ""
	I1123 09:09:21.574511  432810 ssh_runner.go:195] Run: sudo runc list -f json
	I1123 09:09:21.586132  432810 retry.go:31] will retry after 451.191772ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T09:09:21Z" level=error msg="open /run/runc: no such file or directory"
	I1123 09:09:22.038439  432810 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 09:09:22.051431  432810 pause.go:52] kubelet running: false
	I1123 09:09:22.051503  432810 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1123 09:09:22.160554  432810 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1123 09:09:22.160646  432810 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1123 09:09:22.227769  432810 cri.go:89] found id: "edc56583186f2258bd9fdb6eba895c5753f1cecec4a5044814b36d957042477b"
	I1123 09:09:22.227799  432810 cri.go:89] found id: "49f6ca7e606fee383c1970cc49393b673cbe10bab961ad5f1ec4a8fad85217f6"
	I1123 09:09:22.227807  432810 cri.go:89] found id: "b8d492ab9433edafd1001b1ad9293c111df36e0796915a8d3f0c6bc7c2cdf3df"
	I1123 09:09:22.227813  432810 cri.go:89] found id: "0349a0b9c0911ac10237b136d83d49de278765fa5222cc116b95ab287527cd9b"
	I1123 09:09:22.227817  432810 cri.go:89] found id: "6a43edcb0ace54dc346700c8af14f2c2903a53edccf3417648cd37fa8485786d"
	I1123 09:09:22.227823  432810 cri.go:89] found id: "4e62ba65019726752dfd1a28db17ceb7288f5f526cdecef122cccdc9395928a0"
	I1123 09:09:22.227828  432810 cri.go:89] found id: ""
	I1123 09:09:22.227868  432810 ssh_runner.go:195] Run: sudo runc list -f json
	I1123 09:09:22.239590  432810 retry.go:31] will retry after 443.028643ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T09:09:22Z" level=error msg="open /run/runc: no such file or directory"
	I1123 09:09:22.683230  432810 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 09:09:22.697098  432810 pause.go:52] kubelet running: false
	I1123 09:09:22.697161  432810 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1123 09:09:22.814392  432810 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1123 09:09:22.814458  432810 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1123 09:09:22.881588  432810 cri.go:89] found id: "edc56583186f2258bd9fdb6eba895c5753f1cecec4a5044814b36d957042477b"
	I1123 09:09:22.881615  432810 cri.go:89] found id: "49f6ca7e606fee383c1970cc49393b673cbe10bab961ad5f1ec4a8fad85217f6"
	I1123 09:09:22.881620  432810 cri.go:89] found id: "b8d492ab9433edafd1001b1ad9293c111df36e0796915a8d3f0c6bc7c2cdf3df"
	I1123 09:09:22.881624  432810 cri.go:89] found id: "0349a0b9c0911ac10237b136d83d49de278765fa5222cc116b95ab287527cd9b"
	I1123 09:09:22.881626  432810 cri.go:89] found id: "6a43edcb0ace54dc346700c8af14f2c2903a53edccf3417648cd37fa8485786d"
	I1123 09:09:22.881635  432810 cri.go:89] found id: "4e62ba65019726752dfd1a28db17ceb7288f5f526cdecef122cccdc9395928a0"
	I1123 09:09:22.881638  432810 cri.go:89] found id: ""
	I1123 09:09:22.881681  432810 ssh_runner.go:195] Run: sudo runc list -f json
	I1123 09:09:22.895329  432810 out.go:203] 
	W1123 09:09:22.896549  432810 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T09:09:22Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T09:09:22Z" level=error msg="open /run/runc: no such file or directory"
	
	W1123 09:09:22.896567  432810 out.go:285] * 
	* 
	W1123 09:09:22.900576  432810 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1123 09:09:22.901803  432810 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p newest-cni-531046 --alsologtostderr -v=1 failed: exit status 80
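The exit status 80 above is minikube's GUEST_PAUSE error: kubelet is disabled on the first attempt ("kubelet running: true", then false on retries), but every retry of the container-listing step fails because `sudo runc list -f json` cannot open /run/runc on the node. The probe can be replayed by hand with the same commands the trace runs; a sketch, assuming the profile is still up and using `minikube ssh` in place of the test's ssh_runner:

	out/minikube-linux-amd64 ssh -p newest-cni-531046 -- sudo systemctl is-active kubelet
	out/minikube-linux-amd64 ssh -p newest-cni-531046 -- \
	  sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
	out/minikube-linux-amd64 ssh -p newest-cni-531046 -- sudo runc list -f json
	# expected failure, as in the trace: open /run/runc: no such file or directory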
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-531046
helpers_test.go:243: (dbg) docker inspect newest-cni-531046:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "7ad7518812cf34e404a70b8ee996e1d79bb5f390569d392d15236f2eb4c5d18d",
	        "Created": "2025-11-23T09:08:44.244823038Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 428920,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-23T09:09:09.64625841Z",
	            "FinishedAt": "2025-11-23T09:09:08.750291872Z"
	        },
	        "Image": "sha256:133ca4ac39008d0056ad45d8cb70521d6b70d6e1b8bbff4678fd4b354efbdf70",
	        "ResolvConfPath": "/var/lib/docker/containers/7ad7518812cf34e404a70b8ee996e1d79bb5f390569d392d15236f2eb4c5d18d/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/7ad7518812cf34e404a70b8ee996e1d79bb5f390569d392d15236f2eb4c5d18d/hostname",
	        "HostsPath": "/var/lib/docker/containers/7ad7518812cf34e404a70b8ee996e1d79bb5f390569d392d15236f2eb4c5d18d/hosts",
	        "LogPath": "/var/lib/docker/containers/7ad7518812cf34e404a70b8ee996e1d79bb5f390569d392d15236f2eb4c5d18d/7ad7518812cf34e404a70b8ee996e1d79bb5f390569d392d15236f2eb4c5d18d-json.log",
	        "Name": "/newest-cni-531046",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "newest-cni-531046:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "newest-cni-531046",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "7ad7518812cf34e404a70b8ee996e1d79bb5f390569d392d15236f2eb4c5d18d",
	                "LowerDir": "/var/lib/docker/overlay2/a6b8cbeab294cec452e6084f26224fb1434adf265da8070f9f1f559341474ade-init/diff:/var/lib/docker/overlay2/5d7426d7b7590330a796f9ce8929a3dfaf6f95af5e86f4e4ea7f2a7f53308616/diff",
	                "MergedDir": "/var/lib/docker/overlay2/a6b8cbeab294cec452e6084f26224fb1434adf265da8070f9f1f559341474ade/merged",
	                "UpperDir": "/var/lib/docker/overlay2/a6b8cbeab294cec452e6084f26224fb1434adf265da8070f9f1f559341474ade/diff",
	                "WorkDir": "/var/lib/docker/overlay2/a6b8cbeab294cec452e6084f26224fb1434adf265da8070f9f1f559341474ade/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "newest-cni-531046",
	                "Source": "/var/lib/docker/volumes/newest-cni-531046/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-531046",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-531046",
	                "name.minikube.sigs.k8s.io": "newest-cni-531046",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "cb8d1c1cf86e29c0717780c194c6012a359c1940f9080bc2fd6a45844072f6bf",
	            "SandboxKey": "/var/run/docker/netns/cb8d1c1cf86e",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33133"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33134"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33137"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33135"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33136"
	                    }
	                ]
	            },
	            "Networks": {
	                "newest-cni-531046": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "544bf9b9ea19f870a2f79e0c461f820624a157b8c35e72ac8d0afba61525282f",
	                    "EndpointID": "cae0985d1e628d1ba5fb0d0da971f033d83e291624ca2fd6db4321b979be82ea",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "MacAddress": "46:82:f3:0e:9a:62",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-531046",
	                        "7ad7518812cf"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
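The Ports map in the inspect output above matches the SSH endpoint the pause trace resolved (sshutil: 127.0.0.1:33133); the same lookup can be repeated directly with the template the trace used:

	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' newest-cni-531046
	# prints 33133 for this run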
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-531046 -n newest-cni-531046
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-531046 -n newest-cni-531046: exit status 2 (329.984029ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-531046 logs -n 25
helpers_test.go:260: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ addons  │ enable metrics-server -p default-k8s-diff-port-602386 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-602386 │ jenkins │ v1.37.0 │ 23 Nov 25 09:08 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-602386 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-602386 │ jenkins │ v1.37.0 │ 23 Nov 25 09:08 UTC │ 23 Nov 25 09:08 UTC │
	│ addons  │ enable dashboard -p embed-certs-529341 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-529341           │ jenkins │ v1.37.0 │ 23 Nov 25 09:08 UTC │ 23 Nov 25 09:08 UTC │
	│ start   │ -p embed-certs-529341 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-529341           │ jenkins │ v1.37.0 │ 23 Nov 25 09:08 UTC │ 23 Nov 25 09:09 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-602386 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-602386 │ jenkins │ v1.37.0 │ 23 Nov 25 09:08 UTC │ 23 Nov 25 09:08 UTC │
	│ start   │ -p default-k8s-diff-port-602386 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-602386 │ jenkins │ v1.37.0 │ 23 Nov 25 09:08 UTC │ 23 Nov 25 09:09 UTC │
	│ image   │ old-k8s-version-054094 image list --format=json                                                                                                                                                                                               │ old-k8s-version-054094       │ jenkins │ v1.37.0 │ 23 Nov 25 09:08 UTC │ 23 Nov 25 09:08 UTC │
	│ pause   │ -p old-k8s-version-054094 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-054094       │ jenkins │ v1.37.0 │ 23 Nov 25 09:08 UTC │                     │
	│ delete  │ -p old-k8s-version-054094                                                                                                                                                                                                                     │ old-k8s-version-054094       │ jenkins │ v1.37.0 │ 23 Nov 25 09:08 UTC │ 23 Nov 25 09:08 UTC │
	│ delete  │ -p old-k8s-version-054094                                                                                                                                                                                                                     │ old-k8s-version-054094       │ jenkins │ v1.37.0 │ 23 Nov 25 09:08 UTC │ 23 Nov 25 09:08 UTC │
	│ start   │ -p newest-cni-531046 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-531046            │ jenkins │ v1.37.0 │ 23 Nov 25 09:08 UTC │ 23 Nov 25 09:09 UTC │
	│ image   │ no-preload-619589 image list --format=json                                                                                                                                                                                                    │ no-preload-619589            │ jenkins │ v1.37.0 │ 23 Nov 25 09:08 UTC │ 23 Nov 25 09:08 UTC │
	│ pause   │ -p no-preload-619589 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-619589            │ jenkins │ v1.37.0 │ 23 Nov 25 09:08 UTC │                     │
	│ delete  │ -p no-preload-619589                                                                                                                                                                                                                          │ no-preload-619589            │ jenkins │ v1.37.0 │ 23 Nov 25 09:08 UTC │ 23 Nov 25 09:08 UTC │
	│ delete  │ -p no-preload-619589                                                                                                                                                                                                                          │ no-preload-619589            │ jenkins │ v1.37.0 │ 23 Nov 25 09:08 UTC │ 23 Nov 25 09:08 UTC │
	│ addons  │ enable metrics-server -p newest-cni-531046 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-531046            │ jenkins │ v1.37.0 │ 23 Nov 25 09:09 UTC │                     │
	│ stop    │ -p newest-cni-531046 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-531046            │ jenkins │ v1.37.0 │ 23 Nov 25 09:09 UTC │ 23 Nov 25 09:09 UTC │
	│ addons  │ enable dashboard -p newest-cni-531046 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-531046            │ jenkins │ v1.37.0 │ 23 Nov 25 09:09 UTC │ 23 Nov 25 09:09 UTC │
	│ start   │ -p newest-cni-531046 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-531046            │ jenkins │ v1.37.0 │ 23 Nov 25 09:09 UTC │ 23 Nov 25 09:09 UTC │
	│ image   │ embed-certs-529341 image list --format=json                                                                                                                                                                                                   │ embed-certs-529341           │ jenkins │ v1.37.0 │ 23 Nov 25 09:09 UTC │ 23 Nov 25 09:09 UTC │
	│ pause   │ -p embed-certs-529341 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-529341           │ jenkins │ v1.37.0 │ 23 Nov 25 09:09 UTC │                     │
	│ delete  │ -p embed-certs-529341                                                                                                                                                                                                                         │ embed-certs-529341           │ jenkins │ v1.37.0 │ 23 Nov 25 09:09 UTC │ 23 Nov 25 09:09 UTC │
	│ image   │ newest-cni-531046 image list --format=json                                                                                                                                                                                                    │ newest-cni-531046            │ jenkins │ v1.37.0 │ 23 Nov 25 09:09 UTC │ 23 Nov 25 09:09 UTC │
	│ delete  │ -p embed-certs-529341                                                                                                                                                                                                                         │ embed-certs-529341           │ jenkins │ v1.37.0 │ 23 Nov 25 09:09 UTC │ 23 Nov 25 09:09 UTC │
	│ pause   │ -p newest-cni-531046 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-531046            │ jenkins │ v1.37.0 │ 23 Nov 25 09:09 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/23 09:09:09
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1123 09:09:09.393949  428718 out.go:360] Setting OutFile to fd 1 ...
	I1123 09:09:09.394192  428718 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 09:09:09.394201  428718 out.go:374] Setting ErrFile to fd 2...
	I1123 09:09:09.394206  428718 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 09:09:09.394406  428718 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21969-103686/.minikube/bin
	I1123 09:09:09.394917  428718 out.go:368] Setting JSON to false
	I1123 09:09:09.396361  428718 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":6689,"bootTime":1763882260,"procs":405,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1123 09:09:09.396420  428718 start.go:143] virtualization: kvm guest
	I1123 09:09:09.398144  428718 out.go:179] * [newest-cni-531046] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1123 09:09:09.399754  428718 notify.go:221] Checking for updates...
	I1123 09:09:09.399766  428718 out.go:179]   - MINIKUBE_LOCATION=21969
	I1123 09:09:09.402731  428718 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1123 09:09:09.404051  428718 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21969-103686/kubeconfig
	I1123 09:09:09.405353  428718 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21969-103686/.minikube
	I1123 09:09:09.406721  428718 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1123 09:09:09.408298  428718 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1123 09:09:09.410076  428718 config.go:182] Loaded profile config "newest-cni-531046": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 09:09:09.410631  428718 driver.go:422] Setting default libvirt URI to qemu:///system
	I1123 09:09:09.438677  428718 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1123 09:09:09.438842  428718 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 09:09:09.499289  428718 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:61 OomKillDisable:false NGoroutines:66 SystemTime:2025-11-23 09:09:09.488360013 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1123 09:09:09.499392  428718 docker.go:319] overlay module found
	I1123 09:09:09.501298  428718 out.go:179] * Using the docker driver based on existing profile
	I1123 09:09:09.502521  428718 start.go:309] selected driver: docker
	I1123 09:09:09.502539  428718 start.go:927] validating driver "docker" against &{Name:newest-cni-531046 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-531046 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 09:09:09.502628  428718 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1123 09:09:09.503156  428718 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 09:09:09.567159  428718 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:61 OomKillDisable:false NGoroutines:66 SystemTime:2025-11-23 09:09:09.555013229 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1123 09:09:09.567643  428718 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1123 09:09:09.567695  428718 cni.go:84] Creating CNI manager for ""
	I1123 09:09:09.567768  428718 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1123 09:09:09.567832  428718 start.go:353] cluster config:
	{Name:newest-cni-531046 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-531046 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 09:09:09.569790  428718 out.go:179] * Starting "newest-cni-531046" primary control-plane node in "newest-cni-531046" cluster
	I1123 09:09:09.570956  428718 cache.go:134] Beginning downloading kic base image for docker with crio
	I1123 09:09:09.573142  428718 out.go:179] * Pulling base image v0.0.48-1763789673-21948 ...
	I1123 09:09:09.574347  428718 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1123 09:09:09.574385  428718 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21969-103686/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1123 09:09:09.574403  428718 cache.go:65] Caching tarball of preloaded images
	I1123 09:09:09.574469  428718 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1123 09:09:09.574518  428718 preload.go:238] Found /home/jenkins/minikube-integration/21969-103686/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1123 09:09:09.574535  428718 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1123 09:09:09.574672  428718 profile.go:143] Saving config to /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/newest-cni-531046/config.json ...
	I1123 09:09:09.596348  428718 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon, skipping pull
	I1123 09:09:09.596375  428718 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in daemon, skipping load
	I1123 09:09:09.596395  428718 cache.go:243] Successfully downloaded all kic artifacts
	I1123 09:09:09.596441  428718 start.go:360] acquireMachinesLock for newest-cni-531046: {Name:mk2e7449a31b4c230f352b5cfe12c4dd1ce5e4f0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1123 09:09:09.596513  428718 start.go:364] duration metric: took 46.31µs to acquireMachinesLock for "newest-cni-531046"
	I1123 09:09:09.596535  428718 start.go:96] Skipping create...Using existing machine configuration
	I1123 09:09:09.596546  428718 fix.go:54] fixHost starting: 
	I1123 09:09:09.596775  428718 cli_runner.go:164] Run: docker container inspect newest-cni-531046 --format={{.State.Status}}
	I1123 09:09:09.615003  428718 fix.go:112] recreateIfNeeded on newest-cni-531046: state=Stopped err=<nil>
	W1123 09:09:09.615044  428718 fix.go:138] unexpected machine state, will restart: <nil>
	I1123 09:09:10.962211  416838 pod_ready.go:94] pod "coredns-66bc5c9577-64rdm" is "Ready"
	I1123 09:09:10.962238  416838 pod_ready.go:86] duration metric: took 41.505811079s for pod "coredns-66bc5c9577-64rdm" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:09:10.964724  416838 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-602386" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:09:10.968263  416838 pod_ready.go:94] pod "etcd-default-k8s-diff-port-602386" is "Ready"
	I1123 09:09:10.968282  416838 pod_ready.go:86] duration metric: took 3.536222ms for pod "etcd-default-k8s-diff-port-602386" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:09:10.969953  416838 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-602386" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:09:10.973341  416838 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-602386" is "Ready"
	I1123 09:09:10.973358  416838 pod_ready.go:86] duration metric: took 3.359803ms for pod "kube-apiserver-default-k8s-diff-port-602386" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:09:10.975266  416838 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-602386" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:09:11.160920  416838 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-602386" is "Ready"
	I1123 09:09:11.160945  416838 pod_ready.go:86] duration metric: took 185.660534ms for pod "kube-controller-manager-default-k8s-diff-port-602386" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:09:11.361102  416838 pod_ready.go:83] waiting for pod "kube-proxy-wnrqx" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:09:11.760631  416838 pod_ready.go:94] pod "kube-proxy-wnrqx" is "Ready"
	I1123 09:09:11.760661  416838 pod_ready.go:86] duration metric: took 399.534821ms for pod "kube-proxy-wnrqx" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:09:11.961014  416838 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-602386" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:09:12.360788  416838 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-602386" is "Ready"
	I1123 09:09:12.360818  416838 pod_ready.go:86] duration metric: took 399.779479ms for pod "kube-scheduler-default-k8s-diff-port-602386" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:09:12.360830  416838 pod_ready.go:40] duration metric: took 42.908765939s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1123 09:09:12.404049  416838 start.go:625] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1123 09:09:12.405650  416838 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-602386" cluster and "default" namespace by default
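The pod_ready.go polling above is minikube's internal readiness wait. A rough standalone equivalent with kubectl, assuming the kubeconfig already targets this cluster (selectors copied from the log line above; the timeout value is a hypothetical choice, and unlike the logged logic, plain `kubectl wait` errors when a selector matches no pods rather than treating "gone" as success):

	for sel in k8s-app=kube-dns component=etcd component=kube-apiserver \
	           component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler; do
	  kubectl -n kube-system wait --for=condition=Ready pod -l "$sel" --timeout=120s
	done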
	I1123 09:09:09.616814  428718 out.go:252] * Restarting existing docker container for "newest-cni-531046" ...
	I1123 09:09:09.616880  428718 cli_runner.go:164] Run: docker start newest-cni-531046
	I1123 09:09:09.907672  428718 cli_runner.go:164] Run: docker container inspect newest-cni-531046 --format={{.State.Status}}
	I1123 09:09:09.927111  428718 kic.go:430] container "newest-cni-531046" state is running.
	I1123 09:09:09.927497  428718 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-531046
	I1123 09:09:09.947618  428718 profile.go:143] Saving config to /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/newest-cni-531046/config.json ...
	I1123 09:09:09.947894  428718 machine.go:94] provisionDockerMachine start ...
	I1123 09:09:09.948010  428718 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-531046
	I1123 09:09:09.972117  428718 main.go:143] libmachine: Using SSH client type: native
	I1123 09:09:09.972394  428718 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I1123 09:09:09.972403  428718 main.go:143] libmachine: About to run SSH command:
	hostname
	I1123 09:09:09.973126  428718 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:56888->127.0.0.1:33133: read: connection reset by peer
	I1123 09:09:13.118820  428718 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-531046
	
	I1123 09:09:13.118862  428718 ubuntu.go:182] provisioning hostname "newest-cni-531046"
	I1123 09:09:13.118924  428718 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-531046
	I1123 09:09:13.137403  428718 main.go:143] libmachine: Using SSH client type: native
	I1123 09:09:13.137732  428718 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I1123 09:09:13.137754  428718 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-531046 && echo "newest-cni-531046" | sudo tee /etc/hostname
	I1123 09:09:13.292448  428718 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-531046
	
	I1123 09:09:13.292567  428718 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-531046
	I1123 09:09:13.312639  428718 main.go:143] libmachine: Using SSH client type: native
	I1123 09:09:13.312883  428718 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I1123 09:09:13.312902  428718 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-531046' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-531046/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-531046' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1123 09:09:13.456742  428718 main.go:143] libmachine: SSH cmd err, output: <nil>: 
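The SSH script above is the provisioner's idempotent /etc/hosts fix-up: it touches the file only when no entry for the new hostname exists, preferring to rewrite an existing 127.0.1.1 line over appending one. A condensed sketch of the same logic (NEW_HOSTNAME is a placeholder; the real command substitutes the profile name):

	NEW_HOSTNAME=newest-cni-531046   # placeholder
	if ! grep -q "[[:space:]]${NEW_HOSTNAME}\$" /etc/hosts; then
	  if grep -q '^127\.0\.1\.1[[:space:]]' /etc/hosts; then
	    sudo sed -i "s/^127\.0\.1\.1[[:space:]].*/127.0.1.1 ${NEW_HOSTNAME}/" /etc/hosts
	  else
	    echo "127.0.1.1 ${NEW_HOSTNAME}" | sudo tee -a /etc/hosts
	  fi
	fi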
	I1123 09:09:13.456786  428718 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21969-103686/.minikube CaCertPath:/home/jenkins/minikube-integration/21969-103686/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21969-103686/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21969-103686/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21969-103686/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21969-103686/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21969-103686/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21969-103686/.minikube}
	I1123 09:09:13.456823  428718 ubuntu.go:190] setting up certificates
	I1123 09:09:13.456836  428718 provision.go:84] configureAuth start
	I1123 09:09:13.456907  428718 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-531046
	I1123 09:09:13.476479  428718 provision.go:143] copyHostCerts
	I1123 09:09:13.476551  428718 exec_runner.go:144] found /home/jenkins/minikube-integration/21969-103686/.minikube/ca.pem, removing ...
	I1123 09:09:13.476578  428718 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21969-103686/.minikube/ca.pem
	I1123 09:09:13.476667  428718 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21969-103686/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21969-103686/.minikube/ca.pem (1078 bytes)
	I1123 09:09:13.476821  428718 exec_runner.go:144] found /home/jenkins/minikube-integration/21969-103686/.minikube/cert.pem, removing ...
	I1123 09:09:13.476836  428718 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21969-103686/.minikube/cert.pem
	I1123 09:09:13.476878  428718 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21969-103686/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21969-103686/.minikube/cert.pem (1123 bytes)
	I1123 09:09:13.476962  428718 exec_runner.go:144] found /home/jenkins/minikube-integration/21969-103686/.minikube/key.pem, removing ...
	I1123 09:09:13.476997  428718 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21969-103686/.minikube/key.pem
	I1123 09:09:13.477040  428718 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21969-103686/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21969-103686/.minikube/key.pem (1675 bytes)
	I1123 09:09:13.477127  428718 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21969-103686/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21969-103686/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21969-103686/.minikube/certs/ca-key.pem org=jenkins.newest-cni-531046 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-531046]
	I1123 09:09:13.551036  428718 provision.go:177] copyRemoteCerts
	I1123 09:09:13.551092  428718 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1123 09:09:13.551131  428718 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-531046
	I1123 09:09:13.570388  428718 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21969-103686/.minikube/machines/newest-cni-531046/id_rsa Username:docker}
	I1123 09:09:13.674461  428718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-103686/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1123 09:09:13.692480  428718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-103686/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1123 09:09:13.711416  428718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-103686/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1123 09:09:13.728169  428718 provision.go:87] duration metric: took 271.314005ms to configureAuth
	I1123 09:09:13.728202  428718 ubuntu.go:206] setting minikube options for container-runtime
	I1123 09:09:13.728420  428718 config.go:182] Loaded profile config "newest-cni-531046": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 09:09:13.728554  428718 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-531046
	I1123 09:09:13.747174  428718 main.go:143] libmachine: Using SSH client type: native
	I1123 09:09:13.747495  428718 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I1123 09:09:13.747521  428718 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1123 09:09:14.068767  428718 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1123 09:09:14.068799  428718 machine.go:97] duration metric: took 4.120887468s to provisionDockerMachine
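Provisioning finishes by dropping a sysconfig fragment that marks the service CIDR as an insecure registry for CRI-O, then restarting the runtime. The same step as a standalone script (content copied verbatim from the SSH command above):

	sudo mkdir -p /etc/sysconfig
	printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube
	sudo systemctl restart crio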
	I1123 09:09:14.068814  428718 start.go:293] postStartSetup for "newest-cni-531046" (driver="docker")
	I1123 09:09:14.068829  428718 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1123 09:09:14.068900  428718 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1123 09:09:14.068945  428718 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-531046
	I1123 09:09:14.088061  428718 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21969-103686/.minikube/machines/newest-cni-531046/id_rsa Username:docker}
	I1123 09:09:14.190042  428718 ssh_runner.go:195] Run: cat /etc/os-release
	I1123 09:09:14.193920  428718 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1123 09:09:14.193952  428718 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1123 09:09:14.193975  428718 filesync.go:126] Scanning /home/jenkins/minikube-integration/21969-103686/.minikube/addons for local assets ...
	I1123 09:09:14.194042  428718 filesync.go:126] Scanning /home/jenkins/minikube-integration/21969-103686/.minikube/files for local assets ...
	I1123 09:09:14.194148  428718 filesync.go:149] local asset: /home/jenkins/minikube-integration/21969-103686/.minikube/files/etc/ssl/certs/1072342.pem -> 1072342.pem in /etc/ssl/certs
	I1123 09:09:14.194286  428718 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1123 09:09:14.202503  428718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-103686/.minikube/files/etc/ssl/certs/1072342.pem --> /etc/ssl/certs/1072342.pem (1708 bytes)
	I1123 09:09:14.221567  428718 start.go:296] duration metric: took 152.735823ms for postStartSetup
	I1123 09:09:14.221638  428718 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1123 09:09:14.221678  428718 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-531046
	I1123 09:09:14.241073  428718 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21969-103686/.minikube/machines/newest-cni-531046/id_rsa Username:docker}
	I1123 09:09:14.341192  428718 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1123 09:09:14.345736  428718 fix.go:56] duration metric: took 4.749184186s for fixHost
	I1123 09:09:14.345761  428718 start.go:83] releasing machines lock for "newest-cni-531046", held for 4.749236041s
	I1123 09:09:14.345829  428718 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-531046
	I1123 09:09:14.367424  428718 ssh_runner.go:195] Run: cat /version.json
	I1123 09:09:14.367491  428718 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1123 09:09:14.367498  428718 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-531046
	I1123 09:09:14.367566  428718 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-531046
	I1123 09:09:14.387208  428718 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21969-103686/.minikube/machines/newest-cni-531046/id_rsa Username:docker}
	I1123 09:09:14.388547  428718 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21969-103686/.minikube/machines/newest-cni-531046/id_rsa Username:docker}
	I1123 09:09:14.489744  428718 ssh_runner.go:195] Run: systemctl --version
	I1123 09:09:14.553172  428718 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1123 09:09:14.597710  428718 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1123 09:09:14.603833  428718 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1123 09:09:14.603919  428718 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1123 09:09:14.613685  428718 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1123 09:09:14.613716  428718 start.go:496] detecting cgroup driver to use...
	I1123 09:09:14.613753  428718 detect.go:190] detected "systemd" cgroup driver on host os
	I1123 09:09:14.613814  428718 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1123 09:09:14.633265  428718 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1123 09:09:14.647148  428718 docker.go:218] disabling cri-docker service (if available) ...
	I1123 09:09:14.647207  428718 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1123 09:09:14.663589  428718 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1123 09:09:14.677157  428718 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1123 09:09:14.766215  428718 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1123 09:09:14.858401  428718 docker.go:234] disabling docker service ...
	I1123 09:09:14.858470  428718 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1123 09:09:14.873312  428718 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1123 09:09:14.888170  428718 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1123 09:09:14.983215  428718 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1123 09:09:15.073382  428718 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1123 09:09:15.086608  428718 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1123 09:09:15.101866  428718 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1123 09:09:15.101935  428718 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 09:09:15.111226  428718 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1123 09:09:15.111288  428718 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 09:09:15.120834  428718 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 09:09:15.130549  428718 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 09:09:15.140695  428718 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1123 09:09:15.148854  428718 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 09:09:15.157864  428718 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 09:09:15.166336  428718 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 09:09:15.176067  428718 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1123 09:09:15.183505  428718 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1123 09:09:15.191000  428718 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 09:09:15.295741  428718 ssh_runner.go:195] Run: sudo systemctl restart crio
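The sed sequence above rewrites /etc/crio/crio.conf.d/02-crio.conf in place: pin the pause image, switch the cgroup manager to systemd, move conmon into the pod cgroup, then reload systemd and restart CRI-O. The core edits, collected into one sketch for readability (keys and paths exactly as logged; no new options introduced):

	CONF=/etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' "$CONF"
	sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' "$CONF"
	sudo sed -i '/conmon_cgroup = .*/d' "$CONF"
	sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' "$CONF"
	sudo systemctl daemon-reload && sudo systemctl restart crio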
	I1123 09:09:15.433605  428718 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1123 09:09:15.433681  428718 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1123 09:09:15.439424  428718 start.go:564] Will wait 60s for crictl version
	I1123 09:09:15.439490  428718 ssh_runner.go:195] Run: which crictl
	I1123 09:09:15.444124  428718 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1123 09:09:15.469766  428718 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1123 09:09:15.469843  428718 ssh_runner.go:195] Run: crio --version
	I1123 09:09:15.500595  428718 ssh_runner.go:195] Run: crio --version
	I1123 09:09:15.539580  428718 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1123 09:09:15.540673  428718 cli_runner.go:164] Run: docker network inspect newest-cni-531046 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1123 09:09:15.559666  428718 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1123 09:09:15.564697  428718 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1123 09:09:15.581138  428718 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1123 09:09:15.582462  428718 kubeadm.go:884] updating cluster {Name:newest-cni-531046 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-531046 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1123 09:09:15.582650  428718 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1123 09:09:15.582727  428718 ssh_runner.go:195] Run: sudo crictl images --output json
	I1123 09:09:15.616458  428718 crio.go:514] all images are preloaded for cri-o runtime.
	I1123 09:09:15.616482  428718 crio.go:433] Images already preloaded, skipping extraction
	I1123 09:09:15.616540  428718 ssh_runner.go:195] Run: sudo crictl images --output json
	I1123 09:09:15.642742  428718 crio.go:514] all images are preloaded for cri-o runtime.
	I1123 09:09:15.642763  428718 cache_images.go:86] Images are preloaded, skipping loading
	I1123 09:09:15.642771  428718 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1123 09:09:15.642861  428718 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-531046 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-531046 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
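The [Unit]/[Service] fragment above becomes the kubelet systemd drop-in (scp'd below as /etc/systemd/system/kubelet.service.d/10-kubeadm.conf). Once installed, the merged unit can be verified from a shell on the node; a quick check, assuming systemd is available there:

	# Show the kubelet unit with all drop-ins applied, including 10-kubeadm.conf.
	systemctl cat kubelet
	# Confirm the overridden ExecStart took effect after daemon-reload.
	systemctl show kubelet -p ExecStart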
	I1123 09:09:15.642928  428718 ssh_runner.go:195] Run: crio config
	I1123 09:09:15.691553  428718 cni.go:84] Creating CNI manager for ""
	I1123 09:09:15.691572  428718 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1123 09:09:15.691591  428718 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1123 09:09:15.691621  428718 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-531046 NodeName:newest-cni-531046 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1123 09:09:15.691777  428718 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-531046"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1123 09:09:15.691843  428718 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1123 09:09:15.700340  428718 binaries.go:51] Found k8s binaries, skipping transfer
	I1123 09:09:15.700413  428718 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1123 09:09:15.710236  428718 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1123 09:09:15.727317  428718 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1123 09:09:15.743376  428718 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2211 bytes)
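The rendered kubeadm config shown earlier is staged above as /var/tmp/minikube/kubeadm.yaml.new. To sanity-check such a manifest by hand, recent kubeadm releases ship an offline validator; a sketch, assuming the v1.34.1 binaries found on the node:

	# Hypothetical offline check of the staged config (paths from the log above).
	sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate \
	  --config /var/tmp/minikube/kubeadm.yaml.new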
	I1123 09:09:15.758455  428718 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1123 09:09:15.762936  428718 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
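	
	The bash one-liner above makes the hosts entry idempotent: it strips any existing line ending in a tab plus control-plane.minikube.internal, appends the current mapping, and copies the result back over /etc/hosts. The same update sketched in Go (an illustration, not minikube's code; actually writing /etc/hosts requires root):
	
	package main
	
	import (
		"fmt"
		"os"
		"strings"
	)
	
	// ensureHostsEntry drops any line ending in "\t<name>" and appends
	// "<ip>\t<name>", mirroring the grep -v / echo / cp pipeline above.
	func ensureHostsEntry(path, ip, name string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		var kept []string
		for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
			if !strings.HasSuffix(line, "\t"+name) {
				kept = append(kept, line)
			}
		}
		kept = append(kept, ip+"\t"+name)
		return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
	}
	
	func main() {
		if err := ensureHostsEntry("/etc/hosts", "192.168.76.2", "control-plane.minikube.internal"); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}
	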
	I1123 09:09:15.773856  428718 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 09:09:15.864228  428718 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 09:09:15.886692  428718 certs.go:69] Setting up /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/newest-cni-531046 for IP: 192.168.76.2
	I1123 09:09:15.886715  428718 certs.go:195] generating shared ca certs ...
	I1123 09:09:15.886734  428718 certs.go:227] acquiring lock for ca certs: {Name:mkeed0bc088459219396fb35c578dfb0927f31a3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 09:09:15.886911  428718 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21969-103686/.minikube/ca.key
	I1123 09:09:15.886986  428718 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21969-103686/.minikube/proxy-client-ca.key
	I1123 09:09:15.887002  428718 certs.go:257] generating profile certs ...
	I1123 09:09:15.887116  428718 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/newest-cni-531046/client.key
	I1123 09:09:15.887192  428718 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/newest-cni-531046/apiserver.key.a1ea44be
	I1123 09:09:15.887245  428718 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/newest-cni-531046/proxy-client.key
	I1123 09:09:15.887384  428718 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-103686/.minikube/certs/107234.pem (1338 bytes)
	W1123 09:09:15.887428  428718 certs.go:480] ignoring /home/jenkins/minikube-integration/21969-103686/.minikube/certs/107234_empty.pem, impossibly tiny 0 bytes
	I1123 09:09:15.887442  428718 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-103686/.minikube/certs/ca-key.pem (1675 bytes)
	I1123 09:09:15.887489  428718 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-103686/.minikube/certs/ca.pem (1078 bytes)
	I1123 09:09:15.887522  428718 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-103686/.minikube/certs/cert.pem (1123 bytes)
	I1123 09:09:15.887550  428718 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-103686/.minikube/certs/key.pem (1675 bytes)
	I1123 09:09:15.887610  428718 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-103686/.minikube/files/etc/ssl/certs/1072342.pem (1708 bytes)
	I1123 09:09:15.888391  428718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-103686/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1123 09:09:15.908489  428718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-103686/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1123 09:09:15.931840  428718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-103686/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1123 09:09:15.955677  428718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-103686/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1671 bytes)
	I1123 09:09:15.980595  428718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/newest-cni-531046/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1123 09:09:16.003555  428718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/newest-cni-531046/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1123 09:09:16.021453  428718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/newest-cni-531046/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1123 09:09:16.038502  428718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/newest-cni-531046/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1123 09:09:16.055883  428718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-103686/.minikube/files/etc/ssl/certs/1072342.pem --> /usr/share/ca-certificates/1072342.pem (1708 bytes)
	I1123 09:09:16.072577  428718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-103686/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1123 09:09:16.090199  428718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-103686/.minikube/certs/107234.pem --> /usr/share/ca-certificates/107234.pem (1338 bytes)
	I1123 09:09:16.108367  428718 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1123 09:09:16.122045  428718 ssh_runner.go:195] Run: openssl version
	I1123 09:09:16.128705  428718 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1123 09:09:16.136943  428718 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1123 09:09:16.140531  428718 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 23 08:20 /usr/share/ca-certificates/minikubeCA.pem
	I1123 09:09:16.140588  428718 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1123 09:09:16.178739  428718 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1123 09:09:16.187754  428718 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/107234.pem && ln -fs /usr/share/ca-certificates/107234.pem /etc/ssl/certs/107234.pem"
	I1123 09:09:16.195960  428718 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/107234.pem
	I1123 09:09:16.199816  428718 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 23 08:25 /usr/share/ca-certificates/107234.pem
	I1123 09:09:16.199868  428718 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/107234.pem
	I1123 09:09:16.237427  428718 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/107234.pem /etc/ssl/certs/51391683.0"
	I1123 09:09:16.246469  428718 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1072342.pem && ln -fs /usr/share/ca-certificates/1072342.pem /etc/ssl/certs/1072342.pem"
	I1123 09:09:16.255027  428718 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1072342.pem
	I1123 09:09:16.258823  428718 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 23 08:25 /usr/share/ca-certificates/1072342.pem
	I1123 09:09:16.258886  428718 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1072342.pem
	I1123 09:09:16.299069  428718 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1072342.pem /etc/ssl/certs/3ec20f2e.0"
	I1123 09:09:16.308045  428718 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1123 09:09:16.312321  428718 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1123 09:09:16.349349  428718 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1123 09:09:16.387826  428718 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1123 09:09:16.435139  428718 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1123 09:09:16.482951  428718 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1123 09:09:16.533236  428718 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
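	
	Each openssl invocation above uses -checkend 86400, which exits non-zero if the certificate expires within the next 24 hours; a failure here would force certificate regeneration before the restart proceeds. The equivalent check in Go, as a sketch with minimal error handling:
	
	package main
	
	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)
	
	// validFor reports whether the certificate at path is still valid
	// d from now — the Go analogue of `openssl x509 -checkend`.
	func validFor(path string, d time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("no PEM block in %s", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(d).Before(cert.NotAfter), nil
	}
	
	func main() {
		ok, err := validFor("/var/lib/minikube/certs/front-proxy-client.crt", 24*time.Hour)
		fmt.Println(ok, err)
	}
	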
	I1123 09:09:16.591746  428718 kubeadm.go:401] StartCluster: {Name:newest-cni-531046 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-531046 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 09:09:16.591897  428718 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1123 09:09:16.592012  428718 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1123 09:09:16.623916  428718 cri.go:89] found id: "b8d492ab9433edafd1001b1ad9293c111df36e0796915a8d3f0c6bc7c2cdf3df"
	I1123 09:09:16.623942  428718 cri.go:89] found id: "0349a0b9c0911ac10237b136d83d49de278765fa5222cc116b95ab287527cd9b"
	I1123 09:09:16.623948  428718 cri.go:89] found id: "6a43edcb0ace54dc346700c8af14f2c2903a53edccf3417648cd37fa8485786d"
	I1123 09:09:16.623952  428718 cri.go:89] found id: "4e62ba65019726752dfd1a28db17ceb7288f5f526cdecef122cccdc9395928a0"
	I1123 09:09:16.623956  428718 cri.go:89] found id: ""
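	
	The IDs above come from crictl ps -a --quiet filtered on the io.kubernetes.pod.namespace label; the trailing empty id is an artifact of splitting crictl's newline-terminated output. Invoking it the same way from Go, as a sketch (assumes crictl is on PATH and sudo is available):
	
	package main
	
	import (
		"fmt"
		"os/exec"
		"strings"
	)
	
	func main() {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
			"--label", "io.kubernetes.pod.namespace=kube-system").Output()
		if err != nil {
			fmt.Println("crictl failed:", err)
			return
		}
		// strings.Fields also discards the trailing newline that
		// produces the empty id seen in the log above.
		for _, id := range strings.Fields(string(out)) {
			fmt.Println("found id:", id)
		}
	}
	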
	I1123 09:09:16.624037  428718 ssh_runner.go:195] Run: sudo runc list -f json
	W1123 09:09:16.637501  428718 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T09:09:16Z" level=error msg="open /run/runc: no such file or directory"
	I1123 09:09:16.637584  428718 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1123 09:09:16.647076  428718 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1123 09:09:16.647101  428718 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1123 09:09:16.647174  428718 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1123 09:09:16.656920  428718 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1123 09:09:16.658079  428718 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-531046" does not appear in /home/jenkins/minikube-integration/21969-103686/kubeconfig
	I1123 09:09:16.658732  428718 kubeconfig.go:62] /home/jenkins/minikube-integration/21969-103686/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-531046" cluster setting kubeconfig missing "newest-cni-531046" context setting]
	I1123 09:09:16.659991  428718 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21969-103686/kubeconfig: {Name:mk8cdc20da988b29689f768b3cb01ff7e637077d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 09:09:16.661957  428718 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1123 09:09:16.670780  428718 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1123 09:09:16.670810  428718 kubeadm.go:602] duration metric: took 23.701311ms to restartPrimaryControlPlane
	I1123 09:09:16.670821  428718 kubeadm.go:403] duration metric: took 79.16679ms to StartCluster
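	
	The diff -u against kubeadm.yaml.new a few lines up is what lets the restart path skip re-running kubeadm: the freshly rendered config matches what is already on disk, so minikube logs "does not require reconfiguration" and restartPrimaryControlPlane completes in about 24ms. The compare-before-rewrite pattern behind it, as a hedged Go sketch (a hypothetical helper, not minikube's actual function):
	
	package main
	
	import (
		"bytes"
		"fmt"
		"os"
	)
	
	// writeIfChanged rewrites path only when the contents differ,
	// reporting whether anything changed (and thus whether a
	// reconfiguration would be needed).
	func writeIfChanged(path string, data []byte) (bool, error) {
		old, err := os.ReadFile(path)
		if err == nil && bytes.Equal(old, data) {
			return false, nil // identical: nothing to do
		}
		if err != nil && !os.IsNotExist(err) {
			return false, err
		}
		return true, os.WriteFile(path, data, 0o600)
	}
	
	func main() {
		changed, err := writeIfChanged("/var/tmp/minikube/kubeadm.yaml", []byte("example config\n"))
		fmt.Println(changed, err)
	}
	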
	I1123 09:09:16.670837  428718 settings.go:142] acquiring lock: {Name:mk7e59eae8b3289f60fef384e6a5716369959bb2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 09:09:16.670894  428718 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21969-103686/kubeconfig
	I1123 09:09:16.673044  428718 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21969-103686/kubeconfig: {Name:mk8cdc20da988b29689f768b3cb01ff7e637077d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 09:09:16.673289  428718 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1123 09:09:16.673479  428718 config.go:182] Loaded profile config "newest-cni-531046": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 09:09:16.673459  428718 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1123 09:09:16.673557  428718 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-531046"
	I1123 09:09:16.673580  428718 addons.go:70] Setting dashboard=true in profile "newest-cni-531046"
	I1123 09:09:16.673603  428718 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-531046"
	I1123 09:09:16.673610  428718 addons.go:239] Setting addon dashboard=true in "newest-cni-531046"
	W1123 09:09:16.673613  428718 addons.go:248] addon storage-provisioner should already be in state true
	W1123 09:09:16.673619  428718 addons.go:248] addon dashboard should already be in state true
	I1123 09:09:16.673619  428718 addons.go:70] Setting default-storageclass=true in profile "newest-cni-531046"
	I1123 09:09:16.673637  428718 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-531046"
	I1123 09:09:16.673641  428718 host.go:66] Checking if "newest-cni-531046" exists ...
	I1123 09:09:16.673653  428718 host.go:66] Checking if "newest-cni-531046" exists ...
	I1123 09:09:16.673957  428718 cli_runner.go:164] Run: docker container inspect newest-cni-531046 --format={{.State.Status}}
	I1123 09:09:16.674200  428718 cli_runner.go:164] Run: docker container inspect newest-cni-531046 --format={{.State.Status}}
	I1123 09:09:16.674201  428718 cli_runner.go:164] Run: docker container inspect newest-cni-531046 --format={{.State.Status}}
	I1123 09:09:16.674767  428718 out.go:179] * Verifying Kubernetes components...
	I1123 09:09:16.675943  428718 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 09:09:16.701001  428718 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1123 09:09:16.702065  428718 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1123 09:09:16.702082  428718 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1123 09:09:16.702722  428718 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-531046
	I1123 09:09:16.703253  428718 addons.go:239] Setting addon default-storageclass=true in "newest-cni-531046"
	W1123 09:09:16.703273  428718 addons.go:248] addon default-storageclass should already be in state true
	I1123 09:09:16.703305  428718 host.go:66] Checking if "newest-cni-531046" exists ...
	I1123 09:09:16.703772  428718 cli_runner.go:164] Run: docker container inspect newest-cni-531046 --format={{.State.Status}}
	I1123 09:09:16.704323  428718 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1123 09:09:16.705829  428718 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1123 09:09:16.706914  428718 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1123 09:09:16.706958  428718 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1123 09:09:16.707051  428718 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-531046
	I1123 09:09:16.741059  428718 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21969-103686/.minikube/machines/newest-cni-531046/id_rsa Username:docker}
	I1123 09:09:16.742145  428718 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1123 09:09:16.742209  428718 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1123 09:09:16.742331  428718 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-531046
	I1123 09:09:16.744371  428718 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21969-103686/.minikube/machines/newest-cni-531046/id_rsa Username:docker}
	I1123 09:09:16.772639  428718 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21969-103686/.minikube/machines/newest-cni-531046/id_rsa Username:docker}
	I1123 09:09:16.838556  428718 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 09:09:16.855010  428718 api_server.go:52] waiting for apiserver process to appear ...
	I1123 09:09:16.855122  428718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1123 09:09:16.868146  428718 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1123 09:09:16.869823  428718 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1123 09:09:16.869853  428718 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1123 09:09:16.872718  428718 api_server.go:72] duration metric: took 199.388215ms to wait for apiserver process to appear ...
	I1123 09:09:16.872738  428718 api_server.go:88] waiting for apiserver healthz status ...
	I1123 09:09:16.872782  428718 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1123 09:09:16.887859  428718 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1123 09:09:16.887883  428718 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1123 09:09:16.904333  428718 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1123 09:09:16.909029  428718 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1123 09:09:16.909058  428718 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1123 09:09:16.927238  428718 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1123 09:09:16.927274  428718 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1123 09:09:16.948202  428718 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1123 09:09:16.948230  428718 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1123 09:09:16.968718  428718 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1123 09:09:16.968755  428718 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1123 09:09:16.986286  428718 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1123 09:09:16.986318  428718 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1123 09:09:17.003049  428718 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1123 09:09:17.003130  428718 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1123 09:09:17.018884  428718 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1123 09:09:17.018911  428718 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1123 09:09:17.034757  428718 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1123 09:09:18.395495  428718 api_server.go:279] https://192.168.76.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1123 09:09:18.395530  428718 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1123 09:09:18.395546  428718 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1123 09:09:18.409704  428718 api_server.go:279] https://192.168.76.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1123 09:09:18.409739  428718 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1123 09:09:18.873245  428718 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1123 09:09:18.877442  428718 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1123 09:09:18.877468  428718 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
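	
	The health probe above moves through three phases: 403 while the request is still treated as system:anonymous (the RBAC bootstrap roles that permit unauthenticated /healthz reads are not installed yet), 500 while the rbac/bootstrap-roles and scheduling/bootstrap-system-priority-classes post-start hooks are still failing, and finally 200 once every hook reports ok. A standalone Go sketch of the same poll loop (TLS verification is skipped only because this throwaway probe does not load the cluster CA; a real checker should verify the certificate):
	
	package main
	
	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)
	
	// waitHealthy polls url until it returns 200 or timeout elapses.
	func waitHealthy(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout: 5 * time.Second,
			Transport: &http.Transport{
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil
				}
				fmt.Printf("healthz %d (%d bytes)\n", resp.StatusCode, len(body))
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("apiserver not healthy after %s", timeout)
	}
	
	func main() {
		fmt.Println(waitHealthy("https://192.168.76.2:8443/healthz", time.Minute))
	}
	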
	I1123 09:09:18.924122  428718 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.055941929s)
	I1123 09:09:18.924171  428718 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.019794928s)
	I1123 09:09:18.924270  428718 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.889470808s)
	I1123 09:09:18.926158  428718 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-531046 addons enable metrics-server
	
	I1123 09:09:18.934451  428718 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1123 09:09:18.935583  428718 addons.go:530] duration metric: took 2.262123063s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1123 09:09:19.373799  428718 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1123 09:09:19.378037  428718 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1123 09:09:19.378064  428718 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1123 09:09:19.873454  428718 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1123 09:09:19.878905  428718 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1123 09:09:19.879992  428718 api_server.go:141] control plane version: v1.34.1
	I1123 09:09:19.880021  428718 api_server.go:131] duration metric: took 3.007275014s to wait for apiserver health ...
	I1123 09:09:19.880032  428718 system_pods.go:43] waiting for kube-system pods to appear ...
	I1123 09:09:19.883383  428718 system_pods.go:59] 8 kube-system pods found
	I1123 09:09:19.883415  428718 system_pods.go:61] "coredns-66bc5c9577-gk265" [0216f458-438b-4260-8320-f81fb2a01fd4] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1123 09:09:19.883422  428718 system_pods.go:61] "etcd-newest-cni-531046" [1003fb1b-b28b-499c-b1e6-5c8b3d23d4bf] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1123 09:09:19.883428  428718 system_pods.go:61] "kindnet-pbp7c" [72da9944-1b43-4f59-b27a-78a6ebd8f3dc] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1123 09:09:19.883437  428718 system_pods.go:61] "kube-apiserver-newest-cni-531046" [92975545-d846-4326-9cc5-cf12a61f834b] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1123 09:09:19.883445  428718 system_pods.go:61] "kube-controller-manager-newest-cni-531046" [769616d3-3a60-45b1-9246-80ccba447cb5] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1123 09:09:19.883460  428718 system_pods.go:61] "kube-proxy-4bpzx" [a0812143-d250-4445-85b7-dc7d4dbb23ad] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1123 09:09:19.883468  428718 system_pods.go:61] "kube-scheduler-newest-cni-531046" [f713d5f5-1579-48f4-b2f3-9340bfc94c84] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1123 09:09:19.883479  428718 system_pods.go:61] "storage-provisioner" [d15b527f-4a7d-4cd4-bd83-5f0ec906909f] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1123 09:09:19.883485  428718 system_pods.go:74] duration metric: took 3.447563ms to wait for pod list to return data ...
	I1123 09:09:19.883492  428718 default_sa.go:34] waiting for default service account to be created ...
	I1123 09:09:19.886038  428718 default_sa.go:45] found service account: "default"
	I1123 09:09:19.886055  428718 default_sa.go:55] duration metric: took 2.555301ms for default service account to be created ...
	I1123 09:09:19.886067  428718 kubeadm.go:587] duration metric: took 3.212741373s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1123 09:09:19.886084  428718 node_conditions.go:102] verifying NodePressure condition ...
	I1123 09:09:19.888475  428718 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1123 09:09:19.888510  428718 node_conditions.go:123] node cpu capacity is 8
	I1123 09:09:19.888527  428718 node_conditions.go:105] duration metric: took 2.434606ms to run NodePressure ...
	I1123 09:09:19.888549  428718 start.go:242] waiting for startup goroutines ...
	I1123 09:09:19.888563  428718 start.go:247] waiting for cluster config update ...
	I1123 09:09:19.888578  428718 start.go:256] writing updated cluster config ...
	I1123 09:09:19.888867  428718 ssh_runner.go:195] Run: rm -f paused
	I1123 09:09:19.937632  428718 start.go:625] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1123 09:09:19.945384  428718 out.go:179] * Done! kubectl is now configured to use "newest-cni-531046" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Nov 23 09:09:19 newest-cni-531046 crio[526]: time="2025-11-23T09:09:19.271490312Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 09:09:19 newest-cni-531046 crio[526]: time="2025-11-23T09:09:19.274897955Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=e5e1c26b-0364-441b-9faf-63318ebbc36c name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 23 09:09:19 newest-cni-531046 crio[526]: time="2025-11-23T09:09:19.275681084Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=12139da1-010b-4656-ab09-dcea6d0ef3d9 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 23 09:09:19 newest-cni-531046 crio[526]: time="2025-11-23T09:09:19.276371133Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Nov 23 09:09:19 newest-cni-531046 crio[526]: time="2025-11-23T09:09:19.277094522Z" level=info msg="Ran pod sandbox 5127df5706a5bb844434d60a2815c8fab5d3aecf7aef87f538ccd1d3d4a7ad8c with infra container: kube-system/kindnet-pbp7c/POD" id=e5e1c26b-0364-441b-9faf-63318ebbc36c name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 23 09:09:19 newest-cni-531046 crio[526]: time="2025-11-23T09:09:19.277297562Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Nov 23 09:09:19 newest-cni-531046 crio[526]: time="2025-11-23T09:09:19.278238609Z" level=info msg="Ran pod sandbox 8c7cc31914601b5bb0fb0efbdaa827f9d440e1e4fadc7540ab3139122ed5a602 with infra container: kube-system/kube-proxy-4bpzx/POD" id=12139da1-010b-4656-ab09-dcea6d0ef3d9 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 23 09:09:19 newest-cni-531046 crio[526]: time="2025-11-23T09:09:19.278553832Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=e0d8f97a-5f18-48f0-b57a-9e5780f6ef90 name=/runtime.v1.ImageService/ImageStatus
	Nov 23 09:09:19 newest-cni-531046 crio[526]: time="2025-11-23T09:09:19.280661658Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=5bac86f3-68b4-40a3-b4ae-f64b1df9ab94 name=/runtime.v1.ImageService/ImageStatus
	Nov 23 09:09:19 newest-cni-531046 crio[526]: time="2025-11-23T09:09:19.280748019Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=84ef2cb8-47b1-4cc3-8983-702fa7b46c64 name=/runtime.v1.ImageService/ImageStatus
	Nov 23 09:09:19 newest-cni-531046 crio[526]: time="2025-11-23T09:09:19.281603051Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=f7e42029-8f67-40f9-a04b-c563f0944c16 name=/runtime.v1.ImageService/ImageStatus
	Nov 23 09:09:19 newest-cni-531046 crio[526]: time="2025-11-23T09:09:19.281800569Z" level=info msg="Creating container: kube-system/kindnet-pbp7c/kindnet-cni" id=4c8a5bcb-33d6-4f56-ae86-4dc498a35b4f name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 09:09:19 newest-cni-531046 crio[526]: time="2025-11-23T09:09:19.281870526Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 09:09:19 newest-cni-531046 crio[526]: time="2025-11-23T09:09:19.282529509Z" level=info msg="Creating container: kube-system/kube-proxy-4bpzx/kube-proxy" id=f0d14f4e-ccfd-4aed-a916-50e9d63a718b name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 09:09:19 newest-cni-531046 crio[526]: time="2025-11-23T09:09:19.282644687Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 09:09:19 newest-cni-531046 crio[526]: time="2025-11-23T09:09:19.285869229Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 09:09:19 newest-cni-531046 crio[526]: time="2025-11-23T09:09:19.286371119Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 09:09:19 newest-cni-531046 crio[526]: time="2025-11-23T09:09:19.288607701Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 09:09:19 newest-cni-531046 crio[526]: time="2025-11-23T09:09:19.289099414Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 09:09:19 newest-cni-531046 crio[526]: time="2025-11-23T09:09:19.31317035Z" level=info msg="Created container 49f6ca7e606fee383c1970cc49393b673cbe10bab961ad5f1ec4a8fad85217f6: kube-system/kindnet-pbp7c/kindnet-cni" id=4c8a5bcb-33d6-4f56-ae86-4dc498a35b4f name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 09:09:19 newest-cni-531046 crio[526]: time="2025-11-23T09:09:19.313801329Z" level=info msg="Starting container: 49f6ca7e606fee383c1970cc49393b673cbe10bab961ad5f1ec4a8fad85217f6" id=0651be80-02c7-46a8-be7f-83b179f765b4 name=/runtime.v1.RuntimeService/StartContainer
	Nov 23 09:09:19 newest-cni-531046 crio[526]: time="2025-11-23T09:09:19.31535771Z" level=info msg="Started container" PID=1052 containerID=49f6ca7e606fee383c1970cc49393b673cbe10bab961ad5f1ec4a8fad85217f6 description=kube-system/kindnet-pbp7c/kindnet-cni id=0651be80-02c7-46a8-be7f-83b179f765b4 name=/runtime.v1.RuntimeService/StartContainer sandboxID=5127df5706a5bb844434d60a2815c8fab5d3aecf7aef87f538ccd1d3d4a7ad8c
	Nov 23 09:09:19 newest-cni-531046 crio[526]: time="2025-11-23T09:09:19.316099758Z" level=info msg="Created container edc56583186f2258bd9fdb6eba895c5753f1cecec4a5044814b36d957042477b: kube-system/kube-proxy-4bpzx/kube-proxy" id=f0d14f4e-ccfd-4aed-a916-50e9d63a718b name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 09:09:19 newest-cni-531046 crio[526]: time="2025-11-23T09:09:19.31661292Z" level=info msg="Starting container: edc56583186f2258bd9fdb6eba895c5753f1cecec4a5044814b36d957042477b" id=2d5b43a9-41aa-4caf-bf98-53aacd5762f1 name=/runtime.v1.RuntimeService/StartContainer
	Nov 23 09:09:19 newest-cni-531046 crio[526]: time="2025-11-23T09:09:19.3192491Z" level=info msg="Started container" PID=1053 containerID=edc56583186f2258bd9fdb6eba895c5753f1cecec4a5044814b36d957042477b description=kube-system/kube-proxy-4bpzx/kube-proxy id=2d5b43a9-41aa-4caf-bf98-53aacd5762f1 name=/runtime.v1.RuntimeService/StartContainer sandboxID=8c7cc31914601b5bb0fb0efbdaa827f9d440e1e4fadc7540ab3139122ed5a602
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	edc56583186f2       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7   4 seconds ago       Running             kube-proxy                1                   8c7cc31914601       kube-proxy-4bpzx                            kube-system
	49f6ca7e606fe       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   4 seconds ago       Running             kindnet-cni               1                   5127df5706a5b       kindnet-pbp7c                               kube-system
	b8d492ab9433e       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813   7 seconds ago       Running             kube-scheduler            1                   b0893a49d6be7       kube-scheduler-newest-cni-531046            kube-system
	0349a0b9c0911       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f   7 seconds ago       Running             kube-controller-manager   1                   ab2bcdb421604       kube-controller-manager-newest-cni-531046   kube-system
	6a43edcb0ace5       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97   7 seconds ago       Running             kube-apiserver            1                   546863202fe60       kube-apiserver-newest-cni-531046            kube-system
	4e62ba6501972       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   7 seconds ago       Running             etcd                      1                   edcba67ad8509       etcd-newest-cni-531046                      kube-system
	
	
	==> describe nodes <==
	Name:               newest-cni-531046
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=newest-cni-531046
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=50c3a8a3c03e8a84b6c978a884d21c3de8c6d4f1
	                    minikube.k8s.io/name=newest-cni-531046
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_23T09_08_59_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 23 Nov 2025 09:08:55 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-531046
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 23 Nov 2025 09:09:18 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 23 Nov 2025 09:09:18 +0000   Sun, 23 Nov 2025 09:08:54 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 23 Nov 2025 09:09:18 +0000   Sun, 23 Nov 2025 09:08:54 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 23 Nov 2025 09:09:18 +0000   Sun, 23 Nov 2025 09:08:54 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Sun, 23 Nov 2025 09:09:18 +0000   Sun, 23 Nov 2025 09:08:54 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    newest-cni-531046
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 9629f1d5bc1ed524a56ce23c69214c09
	  System UUID:                269c937c-ad30-473c-998a-d61087f9e09b
	  Boot ID:                    49386d37-de97-442f-8364-9b77578bcf47
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-531046                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         25s
	  kube-system                 kindnet-pbp7c                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      20s
	  kube-system                 kube-apiserver-newest-cni-531046             250m (3%)     0 (0%)      0 (0%)           0 (0%)         25s
	  kube-system                 kube-controller-manager-newest-cni-531046    200m (2%)     0 (0%)      0 (0%)           0 (0%)         25s
	  kube-system                 kube-proxy-4bpzx                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         20s
	  kube-system                 kube-scheduler-newest-cni-531046             100m (1%)     0 (0%)      0 (0%)           0 (0%)         25s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 19s                kube-proxy       
	  Normal  Starting                 4s                 kube-proxy       
	  Normal  Starting                 30s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  30s (x8 over 30s)  kubelet          Node newest-cni-531046 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    30s (x8 over 30s)  kubelet          Node newest-cni-531046 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     30s (x8 over 30s)  kubelet          Node newest-cni-531046 status is now: NodeHasSufficientPID
	  Normal  Starting                 25s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  25s                kubelet          Node newest-cni-531046 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    25s                kubelet          Node newest-cni-531046 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     25s                kubelet          Node newest-cni-531046 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           21s                node-controller  Node newest-cni-531046 event: Registered Node newest-cni-531046 in Controller
	  Normal  RegisteredNode           2s                 node-controller  Node newest-cni-531046 event: Registered Node newest-cni-531046 in Controller
	
	
	==> dmesg <==
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff e2 f6 b3 df 65 66 08 06
	[ +15.220231] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff ce d6 cd 1c d5 af 08 06
	[  +0.016823] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff 42 21 9e 70 54 d2 08 06
	[  +0.853950] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 8a f3 da 67 50 34 08 06
	[  +0.000402] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff e2 f6 b3 df 65 66 08 06
	[Nov23 09:06] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 3a fe f0 bb b2 e5 08 06
	[  +0.000433] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000017] ll header: 00000000: ff ff ff ff ff ff 42 21 9e 70 54 d2 08 06
	[ +22.099976] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff fa bc 42 68 ca 38 08 06
	[  +0.042361] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 02 6f 93 2c ed 12 08 06
	[ +12.988668] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff c6 40 c7 0d 08 88 08 06
	[  +0.000458] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff c6 f2 c5 3b d5 0a 08 06
	[  +8.074904] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff ba d8 15 23 cb ea 08 06
	[  +0.000480] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff fa bc 42 68 ca 38 08 06
	
	
	==> etcd [4e62ba65019726752dfd1a28db17ceb7288f5f526cdecef122cccdc9395928a0] <==
	{"level":"warn","ts":"2025-11-23T09:09:17.749617Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35390","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:09:17.758834Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35410","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:09:17.766435Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35412","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:09:17.772518Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35432","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:09:17.779540Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35446","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:09:17.787271Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35468","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:09:17.794078Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35484","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:09:17.800070Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35514","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:09:17.807372Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35538","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:09:17.815102Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35570","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:09:17.822508Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35584","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:09:17.829593Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35602","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:09:17.836729Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35606","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:09:17.843544Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35624","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:09:17.851213Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35632","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:09:17.858080Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35656","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:09:17.865567Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35690","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:09:17.872575Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35696","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:09:17.879508Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35714","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:09:17.887268Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35730","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:09:17.895223Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35744","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:09:17.908892Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35760","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:09:17.916022Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35790","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:09:17.923805Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35810","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:09:17.977912Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35848","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 09:09:23 up  1:51,  0 user,  load average: 4.92, 4.51, 2.94
	Linux newest-cni-531046 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [49f6ca7e606fee383c1970cc49393b673cbe10bab961ad5f1ec4a8fad85217f6] <==
	I1123 09:09:19.521595       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1123 09:09:19.521803       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1123 09:09:19.521928       1 main.go:148] setting mtu 1500 for CNI 
	I1123 09:09:19.521949       1 main.go:178] kindnetd IP family: "ipv4"
	I1123 09:09:19.521961       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-23T09:09:19Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1123 09:09:19.771233       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1123 09:09:19.771782       1 controller.go:381] "Waiting for informer caches to sync"
	I1123 09:09:19.771839       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1123 09:09:19.771961       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1123 09:09:20.118631       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1123 09:09:20.118704       1 metrics.go:72] Registering metrics
	I1123 09:09:20.118831       1 controller.go:711] "Syncing nftables rules"
	
	
	==> kube-apiserver [6a43edcb0ace54dc346700c8af14f2c2903a53edccf3417648cd37fa8485786d] <==
	I1123 09:09:18.477747       1 autoregister_controller.go:144] Starting autoregister controller
	I1123 09:09:18.477755       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1123 09:09:18.477762       1 cache.go:39] Caches are synced for autoregister controller
	I1123 09:09:18.477864       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1123 09:09:18.477911       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1123 09:09:18.478610       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1123 09:09:18.482918       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1123 09:09:18.482949       1 policy_source.go:240] refreshing policies
	E1123 09:09:18.484042       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1123 09:09:18.484110       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1123 09:09:18.488307       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1123 09:09:18.530564       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1123 09:09:18.530744       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1123 09:09:18.749324       1 controller.go:667] quota admission added evaluator for: namespaces
	I1123 09:09:18.779168       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1123 09:09:18.798312       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1123 09:09:18.804466       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1123 09:09:18.811278       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1123 09:09:18.840084       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.110.234.181"}
	I1123 09:09:18.849249       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.96.142.213"}
	I1123 09:09:19.380752       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1123 09:09:21.821287       1 controller.go:667] quota admission added evaluator for: endpoints
	I1123 09:09:22.174514       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1123 09:09:22.370961       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1123 09:09:22.421478       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [0349a0b9c0911ac10237b136d83d49de278765fa5222cc116b95ab287527cd9b] <==
	I1123 09:09:21.785114       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1123 09:09:21.795359       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1123 09:09:21.801696       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1123 09:09:21.801711       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1123 09:09:21.801718       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1123 09:09:21.803837       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1123 09:09:21.818682       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1123 09:09:21.818716       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1123 09:09:21.818724       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1123 09:09:21.818669       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1123 09:09:21.818795       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1123 09:09:21.818803       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1123 09:09:21.818829       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1123 09:09:21.818866       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1123 09:09:21.819002       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1123 09:09:21.819006       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1123 09:09:21.820884       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1123 09:09:21.823164       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1123 09:09:21.823209       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1123 09:09:21.825730       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1123 09:09:21.827471       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1123 09:09:21.830703       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1123 09:09:21.833014       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1123 09:09:21.834523       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1123 09:09:21.842872       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [edc56583186f2258bd9fdb6eba895c5753f1cecec4a5044814b36d957042477b] <==
	I1123 09:09:19.352265       1 server_linux.go:53] "Using iptables proxy"
	I1123 09:09:19.409458       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1123 09:09:19.510113       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1123 09:09:19.510144       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1123 09:09:19.510224       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1123 09:09:19.534117       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1123 09:09:19.534177       1 server_linux.go:132] "Using iptables Proxier"
	I1123 09:09:19.539680       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1123 09:09:19.540111       1 server.go:527] "Version info" version="v1.34.1"
	I1123 09:09:19.540140       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1123 09:09:19.541745       1 config.go:200] "Starting service config controller"
	I1123 09:09:19.541827       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1123 09:09:19.541866       1 config.go:309] "Starting node config controller"
	I1123 09:09:19.541891       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1123 09:09:19.541960       1 config.go:106] "Starting endpoint slice config controller"
	I1123 09:09:19.541998       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1123 09:09:19.542046       1 config.go:403] "Starting serviceCIDR config controller"
	I1123 09:09:19.542065       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1123 09:09:19.642439       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1123 09:09:19.642484       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1123 09:09:19.642569       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1123 09:09:19.642582       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [b8d492ab9433edafd1001b1ad9293c111df36e0796915a8d3f0c6bc7c2cdf3df] <==
	I1123 09:09:17.203027       1 serving.go:386] Generated self-signed cert in-memory
	W1123 09:09:18.407367       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1123 09:09:18.407430       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1123 09:09:18.407444       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1123 09:09:18.407455       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1123 09:09:18.447583       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1123 09:09:18.447615       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1123 09:09:18.450436       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1123 09:09:18.450489       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1123 09:09:18.451503       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1123 09:09:18.451601       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1123 09:09:18.551371       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 23 09:09:18 newest-cni-531046 kubelet[676]: E1123 09:09:18.009715     676 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"newest-cni-531046\" not found" node="newest-cni-531046"
	Nov 23 09:09:18 newest-cni-531046 kubelet[676]: E1123 09:09:18.010014     676 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"newest-cni-531046\" not found" node="newest-cni-531046"
	Nov 23 09:09:18 newest-cni-531046 kubelet[676]: I1123 09:09:18.468696     676 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-newest-cni-531046"
	Nov 23 09:09:18 newest-cni-531046 kubelet[676]: I1123 09:09:18.534532     676 kubelet_node_status.go:124] "Node was previously registered" node="newest-cni-531046"
	Nov 23 09:09:18 newest-cni-531046 kubelet[676]: I1123 09:09:18.534619     676 kubelet_node_status.go:78] "Successfully registered node" node="newest-cni-531046"
	Nov 23 09:09:18 newest-cni-531046 kubelet[676]: I1123 09:09:18.534653     676 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Nov 23 09:09:18 newest-cni-531046 kubelet[676]: I1123 09:09:18.535540     676 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Nov 23 09:09:18 newest-cni-531046 kubelet[676]: E1123 09:09:18.578481     676 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-531046\" already exists" pod="kube-system/etcd-newest-cni-531046"
	Nov 23 09:09:18 newest-cni-531046 kubelet[676]: I1123 09:09:18.578520     676 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-newest-cni-531046"
	Nov 23 09:09:18 newest-cni-531046 kubelet[676]: E1123 09:09:18.585702     676 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-531046\" already exists" pod="kube-system/kube-apiserver-newest-cni-531046"
	Nov 23 09:09:18 newest-cni-531046 kubelet[676]: I1123 09:09:18.585743     676 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-newest-cni-531046"
	Nov 23 09:09:18 newest-cni-531046 kubelet[676]: E1123 09:09:18.592328     676 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-newest-cni-531046\" already exists" pod="kube-system/kube-controller-manager-newest-cni-531046"
	Nov 23 09:09:18 newest-cni-531046 kubelet[676]: I1123 09:09:18.592361     676 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-newest-cni-531046"
	Nov 23 09:09:18 newest-cni-531046 kubelet[676]: E1123 09:09:18.596836     676 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-531046\" already exists" pod="kube-system/kube-scheduler-newest-cni-531046"
	Nov 23 09:09:18 newest-cni-531046 kubelet[676]: I1123 09:09:18.964188     676 apiserver.go:52] "Watching apiserver"
	Nov 23 09:09:18 newest-cni-531046 kubelet[676]: I1123 09:09:18.968943     676 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Nov 23 09:09:19 newest-cni-531046 kubelet[676]: I1123 09:09:19.038563     676 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/72da9944-1b43-4f59-b27a-78a6ebd8f3dc-lib-modules\") pod \"kindnet-pbp7c\" (UID: \"72da9944-1b43-4f59-b27a-78a6ebd8f3dc\") " pod="kube-system/kindnet-pbp7c"
	Nov 23 09:09:19 newest-cni-531046 kubelet[676]: I1123 09:09:19.038613     676 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a0812143-d250-4445-85b7-dc7d4dbb23ad-xtables-lock\") pod \"kube-proxy-4bpzx\" (UID: \"a0812143-d250-4445-85b7-dc7d4dbb23ad\") " pod="kube-system/kube-proxy-4bpzx"
	Nov 23 09:09:19 newest-cni-531046 kubelet[676]: I1123 09:09:19.038637     676 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a0812143-d250-4445-85b7-dc7d4dbb23ad-lib-modules\") pod \"kube-proxy-4bpzx\" (UID: \"a0812143-d250-4445-85b7-dc7d4dbb23ad\") " pod="kube-system/kube-proxy-4bpzx"
	Nov 23 09:09:19 newest-cni-531046 kubelet[676]: I1123 09:09:19.038694     676 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/72da9944-1b43-4f59-b27a-78a6ebd8f3dc-cni-cfg\") pod \"kindnet-pbp7c\" (UID: \"72da9944-1b43-4f59-b27a-78a6ebd8f3dc\") " pod="kube-system/kindnet-pbp7c"
	Nov 23 09:09:19 newest-cni-531046 kubelet[676]: I1123 09:09:19.038730     676 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/72da9944-1b43-4f59-b27a-78a6ebd8f3dc-xtables-lock\") pod \"kindnet-pbp7c\" (UID: \"72da9944-1b43-4f59-b27a-78a6ebd8f3dc\") " pod="kube-system/kindnet-pbp7c"
	Nov 23 09:09:20 newest-cni-531046 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 23 09:09:20 newest-cni-531046 kubelet[676]: I1123 09:09:20.972832     676 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Nov 23 09:09:20 newest-cni-531046 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 23 09:09:20 newest-cni-531046 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

                                                
                                                
-- /stdout --
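Note the tail of the kubelet log above: systemd stops kubelet.service at 09:09:20, consistent with the pause operation under test being in flight when these logs were captured. A quick way to confirm the unit state from the host is to run systemctl inside the node container; the Go sketch below is illustrative only (the profile name comes from this run, and the helper is not part of the harness):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// "minikube ssh --" runs the rest of the argv inside the node container.
		// systemctl is-active exits non-zero when the unit is inactive, so the
		// error is deliberately ignored and only the reported state is printed.
		cmd := exec.Command("out/minikube-linux-amd64", "-p", "newest-cni-531046",
			"ssh", "--", "sudo", "systemctl", "is-active", "kubelet")
		out, _ := cmd.CombinedOutput()
		fmt.Println("kubelet:", strings.TrimSpace(string(out)))
	}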
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-531046 -n newest-cni-531046
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-531046 -n newest-cni-531046: exit status 2 (332.229157ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
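The "(may be ok)" annotation exists because minikube status encodes component state in its exit code, so a non-zero exit can still carry a usable state string ("Running" here) on stdout. A minimal Go sketch of that tolerance, assuming a hypothetical apiServerStatus helper around the same binary and flags used above (not the actual helpers_test.go implementation):

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	// apiServerStatus shells out to `minikube status` the way the harness does
	// and treats exit status 2 as informational: the printed state is kept
	// instead of being discarded as an error.
	func apiServerStatus(profile string) (string, error) {
		cmd := exec.Command("out/minikube-linux-amd64", "status",
			"--format={{.APIServer}}", "-p", profile, "-n", profile)
		out, err := cmd.CombinedOutput()
		var exitErr *exec.ExitError
		if errors.As(err, &exitErr) && exitErr.ExitCode() == 2 {
			return string(out), nil // exit status 2: state reported, "may be ok"
		}
		return string(out), err
	}

	func main() {
		state, err := apiServerStatus("newest-cni-531046")
		fmt.Printf("state=%q err=%v\n", state, err)
	}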
helpers_test.go:269: (dbg) Run:  kubectl --context newest-cni-531046 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: coredns-66bc5c9577-gk265 storage-provisioner dashboard-metrics-scraper-6ffb444bf9-s9dpj kubernetes-dashboard-855c9754f9-bxx6f
helpers_test.go:282: ======> post-mortem[TestStartStop/group/newest-cni/serial/Pause]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context newest-cni-531046 describe pod coredns-66bc5c9577-gk265 storage-provisioner dashboard-metrics-scraper-6ffb444bf9-s9dpj kubernetes-dashboard-855c9754f9-bxx6f
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context newest-cni-531046 describe pod coredns-66bc5c9577-gk265 storage-provisioner dashboard-metrics-scraper-6ffb444bf9-s9dpj kubernetes-dashboard-855c9754f9-bxx6f: exit status 1 (65.367749ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-66bc5c9577-gk265" not found
	Error from server (NotFound): pods "storage-provisioner" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-6ffb444bf9-s9dpj" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-bxx6f" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context newest-cni-531046 describe pod coredns-66bc5c9577-gk265 storage-provisioner dashboard-metrics-scraper-6ffb444bf9-s9dpj kubernetes-dashboard-855c9754f9-bxx6f: exit status 1
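The NotFound errors are a benign race in this post-mortem step: the pod names were gathered with a field selector moments earlier, and the pods were deleted before describe ran, so the harness treats the describe as best-effort. A hedged Go sketch of the two-step pattern (illustrative, not the actual helpers_test.go code):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		ctx := "newest-cni-531046"
		// Step 1: list pods not in phase Running, matching the kubectl
		// invocation shown in the log above.
		list := exec.Command("kubectl", "--context", ctx, "get", "po", "-A",
			"-o=jsonpath={.items[*].metadata.name}",
			"--field-selector=status.phase!=Running")
		out, err := list.CombinedOutput()
		if err != nil {
			fmt.Println("list failed:", err)
			return
		}
		names := strings.Fields(string(out))
		if len(names) == 0 {
			return
		}
		// Step 2: best-effort describe. A NotFound here usually means the pod
		// terminated between the two calls, as happened in this run.
		describe := exec.Command("kubectl",
			append([]string{"--context", ctx, "describe", "pod"}, names...)...)
		descOut, _ := describe.CombinedOutput()
		fmt.Println(string(descOut))
	}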
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
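That snapshot is just the three standard proxy variables, with unset ones rendered as "<empty>". A minimal Go equivalent (an illustrative sketch, not the harness implementation):

	package main

	import (
		"fmt"
		"os"
	)

	func main() {
		// Report each proxy variable explicitly, even when unset, so the
		// post-mortem record shows that no proxy was in play.
		for _, k := range []string{"HTTP_PROXY", "HTTPS_PROXY", "NO_PROXY"} {
			v := os.Getenv(k)
			if v == "" {
				v = "<empty>"
			}
			fmt.Printf("%s=%q\n", k, v)
		}
	}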
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-531046
helpers_test.go:243: (dbg) docker inspect newest-cni-531046:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "7ad7518812cf34e404a70b8ee996e1d79bb5f390569d392d15236f2eb4c5d18d",
	        "Created": "2025-11-23T09:08:44.244823038Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 428920,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-23T09:09:09.64625841Z",
	            "FinishedAt": "2025-11-23T09:09:08.750291872Z"
	        },
	        "Image": "sha256:133ca4ac39008d0056ad45d8cb70521d6b70d6e1b8bbff4678fd4b354efbdf70",
	        "ResolvConfPath": "/var/lib/docker/containers/7ad7518812cf34e404a70b8ee996e1d79bb5f390569d392d15236f2eb4c5d18d/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/7ad7518812cf34e404a70b8ee996e1d79bb5f390569d392d15236f2eb4c5d18d/hostname",
	        "HostsPath": "/var/lib/docker/containers/7ad7518812cf34e404a70b8ee996e1d79bb5f390569d392d15236f2eb4c5d18d/hosts",
	        "LogPath": "/var/lib/docker/containers/7ad7518812cf34e404a70b8ee996e1d79bb5f390569d392d15236f2eb4c5d18d/7ad7518812cf34e404a70b8ee996e1d79bb5f390569d392d15236f2eb4c5d18d-json.log",
	        "Name": "/newest-cni-531046",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "newest-cni-531046:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "newest-cni-531046",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "7ad7518812cf34e404a70b8ee996e1d79bb5f390569d392d15236f2eb4c5d18d",
	                "LowerDir": "/var/lib/docker/overlay2/a6b8cbeab294cec452e6084f26224fb1434adf265da8070f9f1f559341474ade-init/diff:/var/lib/docker/overlay2/5d7426d7b7590330a796f9ce8929a3dfaf6f95af5e86f4e4ea7f2a7f53308616/diff",
	                "MergedDir": "/var/lib/docker/overlay2/a6b8cbeab294cec452e6084f26224fb1434adf265da8070f9f1f559341474ade/merged",
	                "UpperDir": "/var/lib/docker/overlay2/a6b8cbeab294cec452e6084f26224fb1434adf265da8070f9f1f559341474ade/diff",
	                "WorkDir": "/var/lib/docker/overlay2/a6b8cbeab294cec452e6084f26224fb1434adf265da8070f9f1f559341474ade/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "newest-cni-531046",
	                "Source": "/var/lib/docker/volumes/newest-cni-531046/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-531046",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-531046",
	                "name.minikube.sigs.k8s.io": "newest-cni-531046",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "cb8d1c1cf86e29c0717780c194c6012a359c1940f9080bc2fd6a45844072f6bf",
	            "SandboxKey": "/var/run/docker/netns/cb8d1c1cf86e",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33133"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33134"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33137"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33135"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33136"
	                    }
	                ]
	            },
	            "Networks": {
	                "newest-cni-531046": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "544bf9b9ea19f870a2f79e0c461f820624a157b8c35e72ac8d0afba61525282f",
	                    "EndpointID": "cae0985d1e628d1ba5fb0d0da971f033d83e291624ca2fd6db4321b979be82ea",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "MacAddress": "46:82:f3:0e:9a:62",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-531046",
	                        "7ad7518812cf"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
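One detail worth extracting from the inspect dump: the API server port 8443/tcp is published on 127.0.0.1:33136, which is the endpoint host-side clients use to reach the cluster. A hedged Go sketch that pulls that mapping out with a docker inspect format template (illustrative only; the container name comes from this run):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// The Go template indexes NetworkSettings.Ports["8443/tcp"][0].HostPort,
		// i.e. the same field visible in the JSON above ("33136" in this run).
		cmd := exec.Command("docker", "inspect",
			`--format={{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}`,
			"newest-cni-531046")
		out, err := cmd.CombinedOutput()
		if err != nil {
			fmt.Println("inspect failed:", err, string(out))
			return
		}
		fmt.Println("apiserver host port:", strings.TrimSpace(string(out)))
	}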
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-531046 -n newest-cni-531046
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-531046 -n newest-cni-531046: exit status 2 (351.84162ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-531046 logs -n 25
helpers_test.go:260: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ stop    │ -p default-k8s-diff-port-602386 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-602386 │ jenkins │ v1.37.0 │ 23 Nov 25 09:08 UTC │ 23 Nov 25 09:08 UTC │
	│ addons  │ enable dashboard -p embed-certs-529341 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-529341           │ jenkins │ v1.37.0 │ 23 Nov 25 09:08 UTC │ 23 Nov 25 09:08 UTC │
	│ start   │ -p embed-certs-529341 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-529341           │ jenkins │ v1.37.0 │ 23 Nov 25 09:08 UTC │ 23 Nov 25 09:09 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-602386 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-602386 │ jenkins │ v1.37.0 │ 23 Nov 25 09:08 UTC │ 23 Nov 25 09:08 UTC │
	│ start   │ -p default-k8s-diff-port-602386 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-602386 │ jenkins │ v1.37.0 │ 23 Nov 25 09:08 UTC │ 23 Nov 25 09:09 UTC │
	│ image   │ old-k8s-version-054094 image list --format=json                                                                                                                                                                                               │ old-k8s-version-054094       │ jenkins │ v1.37.0 │ 23 Nov 25 09:08 UTC │ 23 Nov 25 09:08 UTC │
	│ pause   │ -p old-k8s-version-054094 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-054094       │ jenkins │ v1.37.0 │ 23 Nov 25 09:08 UTC │                     │
	│ delete  │ -p old-k8s-version-054094                                                                                                                                                                                                                     │ old-k8s-version-054094       │ jenkins │ v1.37.0 │ 23 Nov 25 09:08 UTC │ 23 Nov 25 09:08 UTC │
	│ delete  │ -p old-k8s-version-054094                                                                                                                                                                                                                     │ old-k8s-version-054094       │ jenkins │ v1.37.0 │ 23 Nov 25 09:08 UTC │ 23 Nov 25 09:08 UTC │
	│ start   │ -p newest-cni-531046 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-531046            │ jenkins │ v1.37.0 │ 23 Nov 25 09:08 UTC │ 23 Nov 25 09:09 UTC │
	│ image   │ no-preload-619589 image list --format=json                                                                                                                                                                                                    │ no-preload-619589            │ jenkins │ v1.37.0 │ 23 Nov 25 09:08 UTC │ 23 Nov 25 09:08 UTC │
	│ pause   │ -p no-preload-619589 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-619589            │ jenkins │ v1.37.0 │ 23 Nov 25 09:08 UTC │                     │
	│ delete  │ -p no-preload-619589                                                                                                                                                                                                                          │ no-preload-619589            │ jenkins │ v1.37.0 │ 23 Nov 25 09:08 UTC │ 23 Nov 25 09:08 UTC │
	│ delete  │ -p no-preload-619589                                                                                                                                                                                                                          │ no-preload-619589            │ jenkins │ v1.37.0 │ 23 Nov 25 09:08 UTC │ 23 Nov 25 09:08 UTC │
	│ addons  │ enable metrics-server -p newest-cni-531046 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-531046            │ jenkins │ v1.37.0 │ 23 Nov 25 09:09 UTC │                     │
	│ stop    │ -p newest-cni-531046 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-531046            │ jenkins │ v1.37.0 │ 23 Nov 25 09:09 UTC │ 23 Nov 25 09:09 UTC │
	│ addons  │ enable dashboard -p newest-cni-531046 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-531046            │ jenkins │ v1.37.0 │ 23 Nov 25 09:09 UTC │ 23 Nov 25 09:09 UTC │
	│ start   │ -p newest-cni-531046 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-531046            │ jenkins │ v1.37.0 │ 23 Nov 25 09:09 UTC │ 23 Nov 25 09:09 UTC │
	│ image   │ embed-certs-529341 image list --format=json                                                                                                                                                                                                   │ embed-certs-529341           │ jenkins │ v1.37.0 │ 23 Nov 25 09:09 UTC │ 23 Nov 25 09:09 UTC │
	│ pause   │ -p embed-certs-529341 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-529341           │ jenkins │ v1.37.0 │ 23 Nov 25 09:09 UTC │                     │
	│ delete  │ -p embed-certs-529341                                                                                                                                                                                                                         │ embed-certs-529341           │ jenkins │ v1.37.0 │ 23 Nov 25 09:09 UTC │ 23 Nov 25 09:09 UTC │
	│ image   │ newest-cni-531046 image list --format=json                                                                                                                                                                                                    │ newest-cni-531046            │ jenkins │ v1.37.0 │ 23 Nov 25 09:09 UTC │ 23 Nov 25 09:09 UTC │
	│ delete  │ -p embed-certs-529341                                                                                                                                                                                                                         │ embed-certs-529341           │ jenkins │ v1.37.0 │ 23 Nov 25 09:09 UTC │ 23 Nov 25 09:09 UTC │
	│ pause   │ -p newest-cni-531046 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-531046            │ jenkins │ v1.37.0 │ 23 Nov 25 09:09 UTC │                     │
	│ image   │ default-k8s-diff-port-602386 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-602386 │ jenkins │ v1.37.0 │ 23 Nov 25 09:09 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/23 09:09:09
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1123 09:09:09.393949  428718 out.go:360] Setting OutFile to fd 1 ...
	I1123 09:09:09.394192  428718 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 09:09:09.394201  428718 out.go:374] Setting ErrFile to fd 2...
	I1123 09:09:09.394206  428718 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 09:09:09.394406  428718 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21969-103686/.minikube/bin
	I1123 09:09:09.394917  428718 out.go:368] Setting JSON to false
	I1123 09:09:09.396361  428718 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":6689,"bootTime":1763882260,"procs":405,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1123 09:09:09.396420  428718 start.go:143] virtualization: kvm guest
	I1123 09:09:09.398144  428718 out.go:179] * [newest-cni-531046] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1123 09:09:09.399754  428718 notify.go:221] Checking for updates...
	I1123 09:09:09.399766  428718 out.go:179]   - MINIKUBE_LOCATION=21969
	I1123 09:09:09.402731  428718 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1123 09:09:09.404051  428718 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21969-103686/kubeconfig
	I1123 09:09:09.405353  428718 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21969-103686/.minikube
	I1123 09:09:09.406721  428718 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1123 09:09:09.408298  428718 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1123 09:09:09.410076  428718 config.go:182] Loaded profile config "newest-cni-531046": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 09:09:09.410631  428718 driver.go:422] Setting default libvirt URI to qemu:///system
	I1123 09:09:09.438677  428718 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1123 09:09:09.438842  428718 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 09:09:09.499289  428718 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:61 OomKillDisable:false NGoroutines:66 SystemTime:2025-11-23 09:09:09.488360013 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1123 09:09:09.499392  428718 docker.go:319] overlay module found
	I1123 09:09:09.501298  428718 out.go:179] * Using the docker driver based on existing profile
	I1123 09:09:09.502521  428718 start.go:309] selected driver: docker
	I1123 09:09:09.502539  428718 start.go:927] validating driver "docker" against &{Name:newest-cni-531046 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-531046 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 09:09:09.502628  428718 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1123 09:09:09.503156  428718 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 09:09:09.567159  428718 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:61 OomKillDisable:false NGoroutines:66 SystemTime:2025-11-23 09:09:09.555013229 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1123 09:09:09.567643  428718 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1123 09:09:09.567695  428718 cni.go:84] Creating CNI manager for ""
	I1123 09:09:09.567768  428718 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1123 09:09:09.567832  428718 start.go:353] cluster config:
	{Name:newest-cni-531046 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-531046 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 09:09:09.569790  428718 out.go:179] * Starting "newest-cni-531046" primary control-plane node in "newest-cni-531046" cluster
	I1123 09:09:09.570956  428718 cache.go:134] Beginning downloading kic base image for docker with crio
	I1123 09:09:09.573142  428718 out.go:179] * Pulling base image v0.0.48-1763789673-21948 ...
	I1123 09:09:09.574347  428718 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1123 09:09:09.574385  428718 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21969-103686/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1123 09:09:09.574403  428718 cache.go:65] Caching tarball of preloaded images
	I1123 09:09:09.574469  428718 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1123 09:09:09.574518  428718 preload.go:238] Found /home/jenkins/minikube-integration/21969-103686/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1123 09:09:09.574535  428718 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1123 09:09:09.574672  428718 profile.go:143] Saving config to /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/newest-cni-531046/config.json ...
	I1123 09:09:09.596348  428718 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon, skipping pull
	I1123 09:09:09.596375  428718 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in daemon, skipping load
	I1123 09:09:09.596395  428718 cache.go:243] Successfully downloaded all kic artifacts
	I1123 09:09:09.596441  428718 start.go:360] acquireMachinesLock for newest-cni-531046: {Name:mk2e7449a31b4c230f352b5cfe12c4dd1ce5e4f0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1123 09:09:09.596513  428718 start.go:364] duration metric: took 46.31µs to acquireMachinesLock for "newest-cni-531046"
	I1123 09:09:09.596535  428718 start.go:96] Skipping create...Using existing machine configuration
	I1123 09:09:09.596546  428718 fix.go:54] fixHost starting: 
	I1123 09:09:09.596775  428718 cli_runner.go:164] Run: docker container inspect newest-cni-531046 --format={{.State.Status}}
	I1123 09:09:09.615003  428718 fix.go:112] recreateIfNeeded on newest-cni-531046: state=Stopped err=<nil>
	W1123 09:09:09.615044  428718 fix.go:138] unexpected machine state, will restart: <nil>
	I1123 09:09:10.962211  416838 pod_ready.go:94] pod "coredns-66bc5c9577-64rdm" is "Ready"
	I1123 09:09:10.962238  416838 pod_ready.go:86] duration metric: took 41.505811079s for pod "coredns-66bc5c9577-64rdm" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:09:10.964724  416838 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-602386" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:09:10.968263  416838 pod_ready.go:94] pod "etcd-default-k8s-diff-port-602386" is "Ready"
	I1123 09:09:10.968282  416838 pod_ready.go:86] duration metric: took 3.536222ms for pod "etcd-default-k8s-diff-port-602386" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:09:10.969953  416838 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-602386" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:09:10.973341  416838 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-602386" is "Ready"
	I1123 09:09:10.973358  416838 pod_ready.go:86] duration metric: took 3.359803ms for pod "kube-apiserver-default-k8s-diff-port-602386" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:09:10.975266  416838 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-602386" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:09:11.160920  416838 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-602386" is "Ready"
	I1123 09:09:11.160945  416838 pod_ready.go:86] duration metric: took 185.660534ms for pod "kube-controller-manager-default-k8s-diff-port-602386" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:09:11.361102  416838 pod_ready.go:83] waiting for pod "kube-proxy-wnrqx" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:09:11.760631  416838 pod_ready.go:94] pod "kube-proxy-wnrqx" is "Ready"
	I1123 09:09:11.760661  416838 pod_ready.go:86] duration metric: took 399.534821ms for pod "kube-proxy-wnrqx" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:09:11.961014  416838 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-602386" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:09:12.360788  416838 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-602386" is "Ready"
	I1123 09:09:12.360818  416838 pod_ready.go:86] duration metric: took 399.779479ms for pod "kube-scheduler-default-k8s-diff-port-602386" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:09:12.360830  416838 pod_ready.go:40] duration metric: took 42.908765939s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1123 09:09:12.404049  416838 start.go:625] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1123 09:09:12.405650  416838 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-602386" cluster and "default" namespace by default
	I1123 09:09:09.616814  428718 out.go:252] * Restarting existing docker container for "newest-cni-531046" ...
	I1123 09:09:09.616880  428718 cli_runner.go:164] Run: docker start newest-cni-531046
	I1123 09:09:09.907672  428718 cli_runner.go:164] Run: docker container inspect newest-cni-531046 --format={{.State.Status}}
	I1123 09:09:09.927111  428718 kic.go:430] container "newest-cni-531046" state is running.
	I1123 09:09:09.927497  428718 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-531046
	I1123 09:09:09.947618  428718 profile.go:143] Saving config to /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/newest-cni-531046/config.json ...
	I1123 09:09:09.947894  428718 machine.go:94] provisionDockerMachine start ...
	I1123 09:09:09.948010  428718 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-531046
	I1123 09:09:09.972117  428718 main.go:143] libmachine: Using SSH client type: native
	I1123 09:09:09.972394  428718 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I1123 09:09:09.972403  428718 main.go:143] libmachine: About to run SSH command:
	hostname
	I1123 09:09:09.973126  428718 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:56888->127.0.0.1:33133: read: connection reset by peer
	I1123 09:09:13.118820  428718 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-531046
	
	I1123 09:09:13.118862  428718 ubuntu.go:182] provisioning hostname "newest-cni-531046"
	I1123 09:09:13.118924  428718 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-531046
	I1123 09:09:13.137403  428718 main.go:143] libmachine: Using SSH client type: native
	I1123 09:09:13.137732  428718 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I1123 09:09:13.137754  428718 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-531046 && echo "newest-cni-531046" | sudo tee /etc/hostname
	I1123 09:09:13.292448  428718 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-531046
	
	I1123 09:09:13.292567  428718 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-531046
	I1123 09:09:13.312639  428718 main.go:143] libmachine: Using SSH client type: native
	I1123 09:09:13.312883  428718 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I1123 09:09:13.312902  428718 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-531046' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-531046/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-531046' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1123 09:09:13.456742  428718 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1123 09:09:13.456786  428718 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21969-103686/.minikube CaCertPath:/home/jenkins/minikube-integration/21969-103686/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21969-103686/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21969-103686/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21969-103686/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21969-103686/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21969-103686/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21969-103686/.minikube}
	I1123 09:09:13.456823  428718 ubuntu.go:190] setting up certificates
	I1123 09:09:13.456836  428718 provision.go:84] configureAuth start
	I1123 09:09:13.456907  428718 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-531046
	I1123 09:09:13.476479  428718 provision.go:143] copyHostCerts
	I1123 09:09:13.476551  428718 exec_runner.go:144] found /home/jenkins/minikube-integration/21969-103686/.minikube/ca.pem, removing ...
	I1123 09:09:13.476578  428718 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21969-103686/.minikube/ca.pem
	I1123 09:09:13.476667  428718 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21969-103686/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21969-103686/.minikube/ca.pem (1078 bytes)
	I1123 09:09:13.476821  428718 exec_runner.go:144] found /home/jenkins/minikube-integration/21969-103686/.minikube/cert.pem, removing ...
	I1123 09:09:13.476836  428718 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21969-103686/.minikube/cert.pem
	I1123 09:09:13.476878  428718 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21969-103686/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21969-103686/.minikube/cert.pem (1123 bytes)
	I1123 09:09:13.476962  428718 exec_runner.go:144] found /home/jenkins/minikube-integration/21969-103686/.minikube/key.pem, removing ...
	I1123 09:09:13.476997  428718 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21969-103686/.minikube/key.pem
	I1123 09:09:13.477040  428718 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21969-103686/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21969-103686/.minikube/key.pem (1675 bytes)
	I1123 09:09:13.477127  428718 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21969-103686/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21969-103686/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21969-103686/.minikube/certs/ca-key.pem org=jenkins.newest-cni-531046 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-531046]
	I1123 09:09:13.551036  428718 provision.go:177] copyRemoteCerts
	I1123 09:09:13.551092  428718 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1123 09:09:13.551131  428718 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-531046
	I1123 09:09:13.570388  428718 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21969-103686/.minikube/machines/newest-cni-531046/id_rsa Username:docker}
	I1123 09:09:13.674461  428718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-103686/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1123 09:09:13.692480  428718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-103686/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1123 09:09:13.711416  428718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-103686/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1123 09:09:13.728169  428718 provision.go:87] duration metric: took 271.314005ms to configureAuth
	I1123 09:09:13.728202  428718 ubuntu.go:206] setting minikube options for container-runtime
	I1123 09:09:13.728420  428718 config.go:182] Loaded profile config "newest-cni-531046": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 09:09:13.728554  428718 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-531046
	I1123 09:09:13.747174  428718 main.go:143] libmachine: Using SSH client type: native
	I1123 09:09:13.747495  428718 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I1123 09:09:13.747521  428718 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1123 09:09:14.068767  428718 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1123 09:09:14.068799  428718 machine.go:97] duration metric: took 4.120887468s to provisionDockerMachine
	I1123 09:09:14.068814  428718 start.go:293] postStartSetup for "newest-cni-531046" (driver="docker")
	I1123 09:09:14.068829  428718 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1123 09:09:14.068900  428718 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1123 09:09:14.068945  428718 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-531046
	I1123 09:09:14.088061  428718 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21969-103686/.minikube/machines/newest-cni-531046/id_rsa Username:docker}
	I1123 09:09:14.190042  428718 ssh_runner.go:195] Run: cat /etc/os-release
	I1123 09:09:14.193920  428718 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1123 09:09:14.193952  428718 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1123 09:09:14.193975  428718 filesync.go:126] Scanning /home/jenkins/minikube-integration/21969-103686/.minikube/addons for local assets ...
	I1123 09:09:14.194042  428718 filesync.go:126] Scanning /home/jenkins/minikube-integration/21969-103686/.minikube/files for local assets ...
	I1123 09:09:14.194148  428718 filesync.go:149] local asset: /home/jenkins/minikube-integration/21969-103686/.minikube/files/etc/ssl/certs/1072342.pem -> 1072342.pem in /etc/ssl/certs
	I1123 09:09:14.194286  428718 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1123 09:09:14.202503  428718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-103686/.minikube/files/etc/ssl/certs/1072342.pem --> /etc/ssl/certs/1072342.pem (1708 bytes)
	I1123 09:09:14.221567  428718 start.go:296] duration metric: took 152.735823ms for postStartSetup
	I1123 09:09:14.221638  428718 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1123 09:09:14.221678  428718 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-531046
	I1123 09:09:14.241073  428718 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21969-103686/.minikube/machines/newest-cni-531046/id_rsa Username:docker}
	I1123 09:09:14.341192  428718 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1123 09:09:14.345736  428718 fix.go:56] duration metric: took 4.749184186s for fixHost
	I1123 09:09:14.345761  428718 start.go:83] releasing machines lock for "newest-cni-531046", held for 4.749236041s
	I1123 09:09:14.345829  428718 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-531046
	I1123 09:09:14.367424  428718 ssh_runner.go:195] Run: cat /version.json
	I1123 09:09:14.367491  428718 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1123 09:09:14.367498  428718 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-531046
	I1123 09:09:14.367566  428718 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-531046
	I1123 09:09:14.387208  428718 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21969-103686/.minikube/machines/newest-cni-531046/id_rsa Username:docker}
	I1123 09:09:14.388547  428718 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21969-103686/.minikube/machines/newest-cni-531046/id_rsa Username:docker}
	I1123 09:09:14.489744  428718 ssh_runner.go:195] Run: systemctl --version
	I1123 09:09:14.553172  428718 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1123 09:09:14.597710  428718 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1123 09:09:14.603833  428718 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1123 09:09:14.603919  428718 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1123 09:09:14.613685  428718 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1123 09:09:14.613716  428718 start.go:496] detecting cgroup driver to use...
	I1123 09:09:14.613753  428718 detect.go:190] detected "systemd" cgroup driver on host os
	I1123 09:09:14.613814  428718 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1123 09:09:14.633265  428718 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1123 09:09:14.647148  428718 docker.go:218] disabling cri-docker service (if available) ...
	I1123 09:09:14.647207  428718 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1123 09:09:14.663589  428718 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1123 09:09:14.677157  428718 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1123 09:09:14.766215  428718 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1123 09:09:14.858401  428718 docker.go:234] disabling docker service ...
	I1123 09:09:14.858470  428718 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1123 09:09:14.873312  428718 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1123 09:09:14.888170  428718 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1123 09:09:14.983215  428718 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1123 09:09:15.073382  428718 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1123 09:09:15.086608  428718 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1123 09:09:15.101866  428718 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1123 09:09:15.101935  428718 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 09:09:15.111226  428718 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1123 09:09:15.111288  428718 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 09:09:15.120834  428718 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 09:09:15.130549  428718 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 09:09:15.140695  428718 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1123 09:09:15.148854  428718 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 09:09:15.157864  428718 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 09:09:15.166336  428718 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 09:09:15.176067  428718 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1123 09:09:15.183505  428718 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1123 09:09:15.191000  428718 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 09:09:15.295741  428718 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1123 09:09:15.433605  428718 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1123 09:09:15.433681  428718 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1123 09:09:15.439424  428718 start.go:564] Will wait 60s for crictl version
	I1123 09:09:15.439490  428718 ssh_runner.go:195] Run: which crictl
	I1123 09:09:15.444124  428718 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1123 09:09:15.469766  428718 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1123 09:09:15.469843  428718 ssh_runner.go:195] Run: crio --version
	I1123 09:09:15.500595  428718 ssh_runner.go:195] Run: crio --version
	I1123 09:09:15.539580  428718 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1123 09:09:15.540673  428718 cli_runner.go:164] Run: docker network inspect newest-cni-531046 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1123 09:09:15.559666  428718 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1123 09:09:15.564697  428718 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1123 09:09:15.581138  428718 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1123 09:09:15.582462  428718 kubeadm.go:884] updating cluster {Name:newest-cni-531046 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-531046 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1123 09:09:15.582650  428718 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1123 09:09:15.582727  428718 ssh_runner.go:195] Run: sudo crictl images --output json
	I1123 09:09:15.616458  428718 crio.go:514] all images are preloaded for cri-o runtime.
	I1123 09:09:15.616482  428718 crio.go:433] Images already preloaded, skipping extraction
	I1123 09:09:15.616540  428718 ssh_runner.go:195] Run: sudo crictl images --output json
	I1123 09:09:15.642742  428718 crio.go:514] all images are preloaded for cri-o runtime.
	I1123 09:09:15.642763  428718 cache_images.go:86] Images are preloaded, skipping loading
	I1123 09:09:15.642771  428718 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1123 09:09:15.642861  428718 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-531046 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-531046 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1123 09:09:15.642928  428718 ssh_runner.go:195] Run: crio config
	I1123 09:09:15.691553  428718 cni.go:84] Creating CNI manager for ""
	I1123 09:09:15.691572  428718 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1123 09:09:15.691591  428718 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1123 09:09:15.691621  428718 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-531046 NodeName:newest-cni-531046 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1123 09:09:15.691777  428718 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-531046"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1123 09:09:15.691843  428718 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1123 09:09:15.700340  428718 binaries.go:51] Found k8s binaries, skipping transfer
	I1123 09:09:15.700413  428718 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1123 09:09:15.710236  428718 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1123 09:09:15.727317  428718 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1123 09:09:15.743376  428718 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2211 bytes)
	I1123 09:09:15.758455  428718 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1123 09:09:15.762936  428718 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1123 09:09:15.773856  428718 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 09:09:15.864228  428718 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 09:09:15.886692  428718 certs.go:69] Setting up /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/newest-cni-531046 for IP: 192.168.76.2
	I1123 09:09:15.886715  428718 certs.go:195] generating shared ca certs ...
	I1123 09:09:15.886734  428718 certs.go:227] acquiring lock for ca certs: {Name:mkeed0bc088459219396fb35c578dfb0927f31a3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 09:09:15.886911  428718 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21969-103686/.minikube/ca.key
	I1123 09:09:15.886986  428718 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21969-103686/.minikube/proxy-client-ca.key
	I1123 09:09:15.887002  428718 certs.go:257] generating profile certs ...
	I1123 09:09:15.887116  428718 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/newest-cni-531046/client.key
	I1123 09:09:15.887192  428718 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/newest-cni-531046/apiserver.key.a1ea44be
	I1123 09:09:15.887245  428718 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/newest-cni-531046/proxy-client.key
	I1123 09:09:15.887384  428718 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-103686/.minikube/certs/107234.pem (1338 bytes)
	W1123 09:09:15.887428  428718 certs.go:480] ignoring /home/jenkins/minikube-integration/21969-103686/.minikube/certs/107234_empty.pem, impossibly tiny 0 bytes
	I1123 09:09:15.887442  428718 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-103686/.minikube/certs/ca-key.pem (1675 bytes)
	I1123 09:09:15.887489  428718 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-103686/.minikube/certs/ca.pem (1078 bytes)
	I1123 09:09:15.887522  428718 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-103686/.minikube/certs/cert.pem (1123 bytes)
	I1123 09:09:15.887550  428718 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-103686/.minikube/certs/key.pem (1675 bytes)
	I1123 09:09:15.887610  428718 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-103686/.minikube/files/etc/ssl/certs/1072342.pem (1708 bytes)
	I1123 09:09:15.888391  428718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-103686/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1123 09:09:15.908489  428718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-103686/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1123 09:09:15.931840  428718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-103686/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1123 09:09:15.955677  428718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-103686/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1671 bytes)
	I1123 09:09:15.980595  428718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/newest-cni-531046/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1123 09:09:16.003555  428718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/newest-cni-531046/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1123 09:09:16.021453  428718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/newest-cni-531046/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1123 09:09:16.038502  428718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/newest-cni-531046/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1123 09:09:16.055883  428718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-103686/.minikube/files/etc/ssl/certs/1072342.pem --> /usr/share/ca-certificates/1072342.pem (1708 bytes)
	I1123 09:09:16.072577  428718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-103686/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1123 09:09:16.090199  428718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-103686/.minikube/certs/107234.pem --> /usr/share/ca-certificates/107234.pem (1338 bytes)
	I1123 09:09:16.108367  428718 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1123 09:09:16.122045  428718 ssh_runner.go:195] Run: openssl version
	I1123 09:09:16.128705  428718 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1123 09:09:16.136943  428718 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1123 09:09:16.140531  428718 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 23 08:20 /usr/share/ca-certificates/minikubeCA.pem
	I1123 09:09:16.140588  428718 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1123 09:09:16.178739  428718 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1123 09:09:16.187754  428718 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/107234.pem && ln -fs /usr/share/ca-certificates/107234.pem /etc/ssl/certs/107234.pem"
	I1123 09:09:16.195960  428718 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/107234.pem
	I1123 09:09:16.199816  428718 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 23 08:25 /usr/share/ca-certificates/107234.pem
	I1123 09:09:16.199868  428718 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/107234.pem
	I1123 09:09:16.237427  428718 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/107234.pem /etc/ssl/certs/51391683.0"
	I1123 09:09:16.246469  428718 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1072342.pem && ln -fs /usr/share/ca-certificates/1072342.pem /etc/ssl/certs/1072342.pem"
	I1123 09:09:16.255027  428718 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1072342.pem
	I1123 09:09:16.258823  428718 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 23 08:25 /usr/share/ca-certificates/1072342.pem
	I1123 09:09:16.258886  428718 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1072342.pem
	I1123 09:09:16.299069  428718 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1072342.pem /etc/ssl/certs/3ec20f2e.0"
	I1123 09:09:16.308045  428718 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1123 09:09:16.312321  428718 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1123 09:09:16.349349  428718 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1123 09:09:16.387826  428718 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1123 09:09:16.435139  428718 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1123 09:09:16.482951  428718 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1123 09:09:16.533236  428718 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1123 09:09:16.591746  428718 kubeadm.go:401] StartCluster: {Name:newest-cni-531046 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-531046 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 09:09:16.591897  428718 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1123 09:09:16.592012  428718 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1123 09:09:16.623916  428718 cri.go:89] found id: "b8d492ab9433edafd1001b1ad9293c111df36e0796915a8d3f0c6bc7c2cdf3df"
	I1123 09:09:16.623942  428718 cri.go:89] found id: "0349a0b9c0911ac10237b136d83d49de278765fa5222cc116b95ab287527cd9b"
	I1123 09:09:16.623948  428718 cri.go:89] found id: "6a43edcb0ace54dc346700c8af14f2c2903a53edccf3417648cd37fa8485786d"
	I1123 09:09:16.623952  428718 cri.go:89] found id: "4e62ba65019726752dfd1a28db17ceb7288f5f526cdecef122cccdc9395928a0"
	I1123 09:09:16.623956  428718 cri.go:89] found id: ""
	I1123 09:09:16.624037  428718 ssh_runner.go:195] Run: sudo runc list -f json
	W1123 09:09:16.637501  428718 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T09:09:16Z" level=error msg="open /run/runc: no such file or directory"
	I1123 09:09:16.637584  428718 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1123 09:09:16.647076  428718 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1123 09:09:16.647101  428718 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1123 09:09:16.647174  428718 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1123 09:09:16.656920  428718 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1123 09:09:16.658079  428718 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-531046" does not appear in /home/jenkins/minikube-integration/21969-103686/kubeconfig
	I1123 09:09:16.658732  428718 kubeconfig.go:62] /home/jenkins/minikube-integration/21969-103686/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-531046" cluster setting kubeconfig missing "newest-cni-531046" context setting]
	I1123 09:09:16.659991  428718 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21969-103686/kubeconfig: {Name:mk8cdc20da988b29689f768b3cb01ff7e637077d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 09:09:16.661957  428718 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1123 09:09:16.670780  428718 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1123 09:09:16.670810  428718 kubeadm.go:602] duration metric: took 23.701311ms to restartPrimaryControlPlane
	I1123 09:09:16.670821  428718 kubeadm.go:403] duration metric: took 79.16679ms to StartCluster
	I1123 09:09:16.670837  428718 settings.go:142] acquiring lock: {Name:mk7e59eae8b3289f60fef384e6a5716369959bb2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 09:09:16.670894  428718 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21969-103686/kubeconfig
	I1123 09:09:16.673044  428718 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21969-103686/kubeconfig: {Name:mk8cdc20da988b29689f768b3cb01ff7e637077d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 09:09:16.673289  428718 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1123 09:09:16.673479  428718 config.go:182] Loaded profile config "newest-cni-531046": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 09:09:16.673459  428718 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1123 09:09:16.673557  428718 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-531046"
	I1123 09:09:16.673580  428718 addons.go:70] Setting dashboard=true in profile "newest-cni-531046"
	I1123 09:09:16.673603  428718 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-531046"
	I1123 09:09:16.673610  428718 addons.go:239] Setting addon dashboard=true in "newest-cni-531046"
	W1123 09:09:16.673613  428718 addons.go:248] addon storage-provisioner should already be in state true
	W1123 09:09:16.673619  428718 addons.go:248] addon dashboard should already be in state true
	I1123 09:09:16.673619  428718 addons.go:70] Setting default-storageclass=true in profile "newest-cni-531046"
	I1123 09:09:16.673637  428718 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-531046"
	I1123 09:09:16.673641  428718 host.go:66] Checking if "newest-cni-531046" exists ...
	I1123 09:09:16.673653  428718 host.go:66] Checking if "newest-cni-531046" exists ...
	I1123 09:09:16.673957  428718 cli_runner.go:164] Run: docker container inspect newest-cni-531046 --format={{.State.Status}}
	I1123 09:09:16.674200  428718 cli_runner.go:164] Run: docker container inspect newest-cni-531046 --format={{.State.Status}}
	I1123 09:09:16.674201  428718 cli_runner.go:164] Run: docker container inspect newest-cni-531046 --format={{.State.Status}}
	I1123 09:09:16.674767  428718 out.go:179] * Verifying Kubernetes components...
	I1123 09:09:16.675943  428718 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 09:09:16.701001  428718 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1123 09:09:16.702065  428718 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1123 09:09:16.702082  428718 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1123 09:09:16.702722  428718 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-531046
	I1123 09:09:16.703253  428718 addons.go:239] Setting addon default-storageclass=true in "newest-cni-531046"
	W1123 09:09:16.703273  428718 addons.go:248] addon default-storageclass should already be in state true
	I1123 09:09:16.703305  428718 host.go:66] Checking if "newest-cni-531046" exists ...
	I1123 09:09:16.703772  428718 cli_runner.go:164] Run: docker container inspect newest-cni-531046 --format={{.State.Status}}
	I1123 09:09:16.704323  428718 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1123 09:09:16.705829  428718 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1123 09:09:16.706914  428718 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1123 09:09:16.706958  428718 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1123 09:09:16.707051  428718 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-531046
	I1123 09:09:16.741059  428718 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21969-103686/.minikube/machines/newest-cni-531046/id_rsa Username:docker}
	I1123 09:09:16.742145  428718 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1123 09:09:16.742209  428718 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1123 09:09:16.742331  428718 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-531046
	I1123 09:09:16.744371  428718 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21969-103686/.minikube/machines/newest-cni-531046/id_rsa Username:docker}
	I1123 09:09:16.772639  428718 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21969-103686/.minikube/machines/newest-cni-531046/id_rsa Username:docker}
	I1123 09:09:16.838556  428718 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 09:09:16.855010  428718 api_server.go:52] waiting for apiserver process to appear ...
	I1123 09:09:16.855122  428718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1123 09:09:16.868146  428718 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1123 09:09:16.869823  428718 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1123 09:09:16.869853  428718 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1123 09:09:16.872718  428718 api_server.go:72] duration metric: took 199.388215ms to wait for apiserver process to appear ...
	I1123 09:09:16.872738  428718 api_server.go:88] waiting for apiserver healthz status ...
	I1123 09:09:16.872782  428718 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1123 09:09:16.887859  428718 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1123 09:09:16.887883  428718 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1123 09:09:16.904333  428718 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1123 09:09:16.909029  428718 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1123 09:09:16.909058  428718 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1123 09:09:16.927238  428718 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1123 09:09:16.927274  428718 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1123 09:09:16.948202  428718 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1123 09:09:16.948230  428718 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1123 09:09:16.968718  428718 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1123 09:09:16.968755  428718 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1123 09:09:16.986286  428718 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1123 09:09:16.986318  428718 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1123 09:09:17.003049  428718 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1123 09:09:17.003130  428718 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1123 09:09:17.018884  428718 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1123 09:09:17.018911  428718 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1123 09:09:17.034757  428718 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1123 09:09:18.395495  428718 api_server.go:279] https://192.168.76.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1123 09:09:18.395530  428718 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1123 09:09:18.395546  428718 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1123 09:09:18.409704  428718 api_server.go:279] https://192.168.76.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1123 09:09:18.409739  428718 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1123 09:09:18.873245  428718 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1123 09:09:18.877442  428718 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1123 09:09:18.877468  428718 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1123 09:09:18.924122  428718 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.055941929s)
	I1123 09:09:18.924171  428718 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.019794928s)
	I1123 09:09:18.924270  428718 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.889470808s)
	I1123 09:09:18.926158  428718 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-531046 addons enable metrics-server
	
	I1123 09:09:18.934451  428718 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1123 09:09:18.935583  428718 addons.go:530] duration metric: took 2.262123063s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1123 09:09:19.373799  428718 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1123 09:09:19.378037  428718 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1123 09:09:19.378064  428718 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1123 09:09:19.873454  428718 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1123 09:09:19.878905  428718 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1123 09:09:19.879992  428718 api_server.go:141] control plane version: v1.34.1
	I1123 09:09:19.880021  428718 api_server.go:131] duration metric: took 3.007275014s to wait for apiserver health ...
	I1123 09:09:19.880032  428718 system_pods.go:43] waiting for kube-system pods to appear ...
	I1123 09:09:19.883383  428718 system_pods.go:59] 8 kube-system pods found
	I1123 09:09:19.883415  428718 system_pods.go:61] "coredns-66bc5c9577-gk265" [0216f458-438b-4260-8320-f81fb2a01fd4] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1123 09:09:19.883422  428718 system_pods.go:61] "etcd-newest-cni-531046" [1003fb1b-b28b-499c-b1e6-5c8b3d23d4bf] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1123 09:09:19.883428  428718 system_pods.go:61] "kindnet-pbp7c" [72da9944-1b43-4f59-b27a-78a6ebd8f3dc] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1123 09:09:19.883437  428718 system_pods.go:61] "kube-apiserver-newest-cni-531046" [92975545-d846-4326-9cc5-cf12a61f834b] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1123 09:09:19.883445  428718 system_pods.go:61] "kube-controller-manager-newest-cni-531046" [769616d3-3a60-45b1-9246-80ccba447cb5] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1123 09:09:19.883460  428718 system_pods.go:61] "kube-proxy-4bpzx" [a0812143-d250-4445-85b7-dc7d4dbb23ad] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1123 09:09:19.883468  428718 system_pods.go:61] "kube-scheduler-newest-cni-531046" [f713d5f5-1579-48f4-b2f3-9340bfc94c84] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1123 09:09:19.883479  428718 system_pods.go:61] "storage-provisioner" [d15b527f-4a7d-4cd4-bd83-5f0ec906909f] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1123 09:09:19.883485  428718 system_pods.go:74] duration metric: took 3.447563ms to wait for pod list to return data ...
	I1123 09:09:19.883492  428718 default_sa.go:34] waiting for default service account to be created ...
	I1123 09:09:19.886038  428718 default_sa.go:45] found service account: "default"
	I1123 09:09:19.886055  428718 default_sa.go:55] duration metric: took 2.555301ms for default service account to be created ...
	I1123 09:09:19.886067  428718 kubeadm.go:587] duration metric: took 3.212741373s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1123 09:09:19.886084  428718 node_conditions.go:102] verifying NodePressure condition ...
	I1123 09:09:19.888475  428718 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1123 09:09:19.888510  428718 node_conditions.go:123] node cpu capacity is 8
	I1123 09:09:19.888527  428718 node_conditions.go:105] duration metric: took 2.434606ms to run NodePressure ...
	I1123 09:09:19.888549  428718 start.go:242] waiting for startup goroutines ...
	I1123 09:09:19.888563  428718 start.go:247] waiting for cluster config update ...
	I1123 09:09:19.888578  428718 start.go:256] writing updated cluster config ...
	I1123 09:09:19.888867  428718 ssh_runner.go:195] Run: rm -f paused
	I1123 09:09:19.937632  428718 start.go:625] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1123 09:09:19.945384  428718 out.go:179] * Done! kubectl is now configured to use "newest-cni-531046" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Nov 23 09:09:19 newest-cni-531046 crio[526]: time="2025-11-23T09:09:19.271490312Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 09:09:19 newest-cni-531046 crio[526]: time="2025-11-23T09:09:19.274897955Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=e5e1c26b-0364-441b-9faf-63318ebbc36c name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 23 09:09:19 newest-cni-531046 crio[526]: time="2025-11-23T09:09:19.275681084Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=12139da1-010b-4656-ab09-dcea6d0ef3d9 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 23 09:09:19 newest-cni-531046 crio[526]: time="2025-11-23T09:09:19.276371133Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Nov 23 09:09:19 newest-cni-531046 crio[526]: time="2025-11-23T09:09:19.277094522Z" level=info msg="Ran pod sandbox 5127df5706a5bb844434d60a2815c8fab5d3aecf7aef87f538ccd1d3d4a7ad8c with infra container: kube-system/kindnet-pbp7c/POD" id=e5e1c26b-0364-441b-9faf-63318ebbc36c name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 23 09:09:19 newest-cni-531046 crio[526]: time="2025-11-23T09:09:19.277297562Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Nov 23 09:09:19 newest-cni-531046 crio[526]: time="2025-11-23T09:09:19.278238609Z" level=info msg="Ran pod sandbox 8c7cc31914601b5bb0fb0efbdaa827f9d440e1e4fadc7540ab3139122ed5a602 with infra container: kube-system/kube-proxy-4bpzx/POD" id=12139da1-010b-4656-ab09-dcea6d0ef3d9 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 23 09:09:19 newest-cni-531046 crio[526]: time="2025-11-23T09:09:19.278553832Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=e0d8f97a-5f18-48f0-b57a-9e5780f6ef90 name=/runtime.v1.ImageService/ImageStatus
	Nov 23 09:09:19 newest-cni-531046 crio[526]: time="2025-11-23T09:09:19.280661658Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=5bac86f3-68b4-40a3-b4ae-f64b1df9ab94 name=/runtime.v1.ImageService/ImageStatus
	Nov 23 09:09:19 newest-cni-531046 crio[526]: time="2025-11-23T09:09:19.280748019Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=84ef2cb8-47b1-4cc3-8983-702fa7b46c64 name=/runtime.v1.ImageService/ImageStatus
	Nov 23 09:09:19 newest-cni-531046 crio[526]: time="2025-11-23T09:09:19.281603051Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=f7e42029-8f67-40f9-a04b-c563f0944c16 name=/runtime.v1.ImageService/ImageStatus
	Nov 23 09:09:19 newest-cni-531046 crio[526]: time="2025-11-23T09:09:19.281800569Z" level=info msg="Creating container: kube-system/kindnet-pbp7c/kindnet-cni" id=4c8a5bcb-33d6-4f56-ae86-4dc498a35b4f name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 09:09:19 newest-cni-531046 crio[526]: time="2025-11-23T09:09:19.281870526Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 09:09:19 newest-cni-531046 crio[526]: time="2025-11-23T09:09:19.282529509Z" level=info msg="Creating container: kube-system/kube-proxy-4bpzx/kube-proxy" id=f0d14f4e-ccfd-4aed-a916-50e9d63a718b name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 09:09:19 newest-cni-531046 crio[526]: time="2025-11-23T09:09:19.282644687Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 09:09:19 newest-cni-531046 crio[526]: time="2025-11-23T09:09:19.285869229Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 09:09:19 newest-cni-531046 crio[526]: time="2025-11-23T09:09:19.286371119Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 09:09:19 newest-cni-531046 crio[526]: time="2025-11-23T09:09:19.288607701Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 09:09:19 newest-cni-531046 crio[526]: time="2025-11-23T09:09:19.289099414Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 09:09:19 newest-cni-531046 crio[526]: time="2025-11-23T09:09:19.31317035Z" level=info msg="Created container 49f6ca7e606fee383c1970cc49393b673cbe10bab961ad5f1ec4a8fad85217f6: kube-system/kindnet-pbp7c/kindnet-cni" id=4c8a5bcb-33d6-4f56-ae86-4dc498a35b4f name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 09:09:19 newest-cni-531046 crio[526]: time="2025-11-23T09:09:19.313801329Z" level=info msg="Starting container: 49f6ca7e606fee383c1970cc49393b673cbe10bab961ad5f1ec4a8fad85217f6" id=0651be80-02c7-46a8-be7f-83b179f765b4 name=/runtime.v1.RuntimeService/StartContainer
	Nov 23 09:09:19 newest-cni-531046 crio[526]: time="2025-11-23T09:09:19.31535771Z" level=info msg="Started container" PID=1052 containerID=49f6ca7e606fee383c1970cc49393b673cbe10bab961ad5f1ec4a8fad85217f6 description=kube-system/kindnet-pbp7c/kindnet-cni id=0651be80-02c7-46a8-be7f-83b179f765b4 name=/runtime.v1.RuntimeService/StartContainer sandboxID=5127df5706a5bb844434d60a2815c8fab5d3aecf7aef87f538ccd1d3d4a7ad8c
	Nov 23 09:09:19 newest-cni-531046 crio[526]: time="2025-11-23T09:09:19.316099758Z" level=info msg="Created container edc56583186f2258bd9fdb6eba895c5753f1cecec4a5044814b36d957042477b: kube-system/kube-proxy-4bpzx/kube-proxy" id=f0d14f4e-ccfd-4aed-a916-50e9d63a718b name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 09:09:19 newest-cni-531046 crio[526]: time="2025-11-23T09:09:19.31661292Z" level=info msg="Starting container: edc56583186f2258bd9fdb6eba895c5753f1cecec4a5044814b36d957042477b" id=2d5b43a9-41aa-4caf-bf98-53aacd5762f1 name=/runtime.v1.RuntimeService/StartContainer
	Nov 23 09:09:19 newest-cni-531046 crio[526]: time="2025-11-23T09:09:19.3192491Z" level=info msg="Started container" PID=1053 containerID=edc56583186f2258bd9fdb6eba895c5753f1cecec4a5044814b36d957042477b description=kube-system/kube-proxy-4bpzx/kube-proxy id=2d5b43a9-41aa-4caf-bf98-53aacd5762f1 name=/runtime.v1.RuntimeService/StartContainer sandboxID=8c7cc31914601b5bb0fb0efbdaa827f9d440e1e4fadc7540ab3139122ed5a602
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	edc56583186f2       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7   6 seconds ago       Running             kube-proxy                1                   8c7cc31914601       kube-proxy-4bpzx                            kube-system
	49f6ca7e606fe       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   6 seconds ago       Running             kindnet-cni               1                   5127df5706a5b       kindnet-pbp7c                               kube-system
	b8d492ab9433e       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813   9 seconds ago       Running             kube-scheduler            1                   b0893a49d6be7       kube-scheduler-newest-cni-531046            kube-system
	0349a0b9c0911       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f   9 seconds ago       Running             kube-controller-manager   1                   ab2bcdb421604       kube-controller-manager-newest-cni-531046   kube-system
	6a43edcb0ace5       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97   9 seconds ago       Running             kube-apiserver            1                   546863202fe60       kube-apiserver-newest-cni-531046            kube-system
	4e62ba6501972       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   9 seconds ago       Running             etcd                      1                   edcba67ad8509       etcd-newest-cni-531046                      kube-system
	
	
	==> describe nodes <==
	Name:               newest-cni-531046
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=newest-cni-531046
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=50c3a8a3c03e8a84b6c978a884d21c3de8c6d4f1
	                    minikube.k8s.io/name=newest-cni-531046
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_23T09_08_59_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 23 Nov 2025 09:08:55 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-531046
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 23 Nov 2025 09:09:18 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 23 Nov 2025 09:09:18 +0000   Sun, 23 Nov 2025 09:08:54 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 23 Nov 2025 09:09:18 +0000   Sun, 23 Nov 2025 09:08:54 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 23 Nov 2025 09:09:18 +0000   Sun, 23 Nov 2025 09:08:54 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Sun, 23 Nov 2025 09:09:18 +0000   Sun, 23 Nov 2025 09:08:54 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    newest-cni-531046
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 9629f1d5bc1ed524a56ce23c69214c09
	  System UUID:                269c937c-ad30-473c-998a-d61087f9e09b
	  Boot ID:                    49386d37-de97-442f-8364-9b77578bcf47
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-531046                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         27s
	  kube-system                 kindnet-pbp7c                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      22s
	  kube-system                 kube-apiserver-newest-cni-531046             250m (3%)     0 (0%)      0 (0%)           0 (0%)         27s
	  kube-system                 kube-controller-manager-newest-cni-531046    200m (2%)     0 (0%)      0 (0%)           0 (0%)         27s
	  kube-system                 kube-proxy-4bpzx                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         22s
	  kube-system                 kube-scheduler-newest-cni-531046             100m (1%)     0 (0%)      0 (0%)           0 (0%)         27s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 21s                kube-proxy       
	  Normal  Starting                 6s                 kube-proxy       
	  Normal  Starting                 32s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  32s (x8 over 32s)  kubelet          Node newest-cni-531046 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    32s (x8 over 32s)  kubelet          Node newest-cni-531046 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     32s (x8 over 32s)  kubelet          Node newest-cni-531046 status is now: NodeHasSufficientPID
	  Normal  Starting                 27s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  27s                kubelet          Node newest-cni-531046 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    27s                kubelet          Node newest-cni-531046 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     27s                kubelet          Node newest-cni-531046 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           23s                node-controller  Node newest-cni-531046 event: Registered Node newest-cni-531046 in Controller
	  Normal  RegisteredNode           4s                 node-controller  Node newest-cni-531046 event: Registered Node newest-cni-531046 in Controller
	
	
	==> dmesg <==
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff e2 f6 b3 df 65 66 08 06
	[ +15.220231] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff ce d6 cd 1c d5 af 08 06
	[  +0.016823] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff 42 21 9e 70 54 d2 08 06
	[  +0.853950] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 8a f3 da 67 50 34 08 06
	[  +0.000402] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff e2 f6 b3 df 65 66 08 06
	[Nov23 09:06] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 3a fe f0 bb b2 e5 08 06
	[  +0.000433] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000017] ll header: 00000000: ff ff ff ff ff ff 42 21 9e 70 54 d2 08 06
	[ +22.099976] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff fa bc 42 68 ca 38 08 06
	[  +0.042361] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 02 6f 93 2c ed 12 08 06
	[ +12.988668] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff c6 40 c7 0d 08 88 08 06
	[  +0.000458] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff c6 f2 c5 3b d5 0a 08 06
	[  +8.074904] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff ba d8 15 23 cb ea 08 06
	[  +0.000480] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff fa bc 42 68 ca 38 08 06
	
	
	==> etcd [4e62ba65019726752dfd1a28db17ceb7288f5f526cdecef122cccdc9395928a0] <==
	{"level":"warn","ts":"2025-11-23T09:09:17.749617Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35390","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:09:17.758834Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35410","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:09:17.766435Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35412","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:09:17.772518Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35432","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:09:17.779540Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35446","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:09:17.787271Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35468","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:09:17.794078Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35484","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:09:17.800070Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35514","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:09:17.807372Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35538","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:09:17.815102Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35570","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:09:17.822508Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35584","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:09:17.829593Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35602","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:09:17.836729Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35606","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:09:17.843544Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35624","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:09:17.851213Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35632","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:09:17.858080Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35656","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:09:17.865567Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35690","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:09:17.872575Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35696","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:09:17.879508Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35714","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:09:17.887268Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35730","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:09:17.895223Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35744","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:09:17.908892Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35760","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:09:17.916022Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35790","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:09:17.923805Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35810","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:09:17.977912Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35848","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 09:09:25 up  1:51,  0 user,  load average: 4.92, 4.51, 2.94
	Linux newest-cni-531046 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [49f6ca7e606fee383c1970cc49393b673cbe10bab961ad5f1ec4a8fad85217f6] <==
	I1123 09:09:19.521595       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1123 09:09:19.521803       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1123 09:09:19.521928       1 main.go:148] setting mtu 1500 for CNI 
	I1123 09:09:19.521949       1 main.go:178] kindnetd IP family: "ipv4"
	I1123 09:09:19.521961       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-23T09:09:19Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1123 09:09:19.771233       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1123 09:09:19.771782       1 controller.go:381] "Waiting for informer caches to sync"
	I1123 09:09:19.771839       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1123 09:09:19.771961       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1123 09:09:20.118631       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1123 09:09:20.118704       1 metrics.go:72] Registering metrics
	I1123 09:09:20.118831       1 controller.go:711] "Syncing nftables rules"
	
	
	==> kube-apiserver [6a43edcb0ace54dc346700c8af14f2c2903a53edccf3417648cd37fa8485786d] <==
	I1123 09:09:18.477747       1 autoregister_controller.go:144] Starting autoregister controller
	I1123 09:09:18.477755       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1123 09:09:18.477762       1 cache.go:39] Caches are synced for autoregister controller
	I1123 09:09:18.477864       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1123 09:09:18.477911       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1123 09:09:18.478610       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1123 09:09:18.482918       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1123 09:09:18.482949       1 policy_source.go:240] refreshing policies
	E1123 09:09:18.484042       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1123 09:09:18.484110       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1123 09:09:18.488307       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1123 09:09:18.530564       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1123 09:09:18.530744       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1123 09:09:18.749324       1 controller.go:667] quota admission added evaluator for: namespaces
	I1123 09:09:18.779168       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1123 09:09:18.798312       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1123 09:09:18.804466       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1123 09:09:18.811278       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1123 09:09:18.840084       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.110.234.181"}
	I1123 09:09:18.849249       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.96.142.213"}
	I1123 09:09:19.380752       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1123 09:09:21.821287       1 controller.go:667] quota admission added evaluator for: endpoints
	I1123 09:09:22.174514       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1123 09:09:22.370961       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1123 09:09:22.421478       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [0349a0b9c0911ac10237b136d83d49de278765fa5222cc116b95ab287527cd9b] <==
	I1123 09:09:21.785114       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1123 09:09:21.795359       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1123 09:09:21.801696       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1123 09:09:21.801711       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1123 09:09:21.801718       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1123 09:09:21.803837       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1123 09:09:21.818682       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1123 09:09:21.818716       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1123 09:09:21.818724       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1123 09:09:21.818669       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1123 09:09:21.818795       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1123 09:09:21.818803       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1123 09:09:21.818829       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1123 09:09:21.818866       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1123 09:09:21.819002       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1123 09:09:21.819006       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1123 09:09:21.820884       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1123 09:09:21.823164       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1123 09:09:21.823209       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1123 09:09:21.825730       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1123 09:09:21.827471       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1123 09:09:21.830703       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1123 09:09:21.833014       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1123 09:09:21.834523       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1123 09:09:21.842872       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [edc56583186f2258bd9fdb6eba895c5753f1cecec4a5044814b36d957042477b] <==
	I1123 09:09:19.352265       1 server_linux.go:53] "Using iptables proxy"
	I1123 09:09:19.409458       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1123 09:09:19.510113       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1123 09:09:19.510144       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1123 09:09:19.510224       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1123 09:09:19.534117       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1123 09:09:19.534177       1 server_linux.go:132] "Using iptables Proxier"
	I1123 09:09:19.539680       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1123 09:09:19.540111       1 server.go:527] "Version info" version="v1.34.1"
	I1123 09:09:19.540140       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1123 09:09:19.541745       1 config.go:200] "Starting service config controller"
	I1123 09:09:19.541827       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1123 09:09:19.541866       1 config.go:309] "Starting node config controller"
	I1123 09:09:19.541891       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1123 09:09:19.541960       1 config.go:106] "Starting endpoint slice config controller"
	I1123 09:09:19.541998       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1123 09:09:19.542046       1 config.go:403] "Starting serviceCIDR config controller"
	I1123 09:09:19.542065       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1123 09:09:19.642439       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1123 09:09:19.642484       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1123 09:09:19.642569       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1123 09:09:19.642582       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [b8d492ab9433edafd1001b1ad9293c111df36e0796915a8d3f0c6bc7c2cdf3df] <==
	I1123 09:09:17.203027       1 serving.go:386] Generated self-signed cert in-memory
	W1123 09:09:18.407367       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1123 09:09:18.407430       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1123 09:09:18.407444       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1123 09:09:18.407455       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1123 09:09:18.447583       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1123 09:09:18.447615       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1123 09:09:18.450436       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1123 09:09:18.450489       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1123 09:09:18.451503       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1123 09:09:18.451601       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1123 09:09:18.551371       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 23 09:09:18 newest-cni-531046 kubelet[676]: E1123 09:09:18.009715     676 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"newest-cni-531046\" not found" node="newest-cni-531046"
	Nov 23 09:09:18 newest-cni-531046 kubelet[676]: E1123 09:09:18.010014     676 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"newest-cni-531046\" not found" node="newest-cni-531046"
	Nov 23 09:09:18 newest-cni-531046 kubelet[676]: I1123 09:09:18.468696     676 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-newest-cni-531046"
	Nov 23 09:09:18 newest-cni-531046 kubelet[676]: I1123 09:09:18.534532     676 kubelet_node_status.go:124] "Node was previously registered" node="newest-cni-531046"
	Nov 23 09:09:18 newest-cni-531046 kubelet[676]: I1123 09:09:18.534619     676 kubelet_node_status.go:78] "Successfully registered node" node="newest-cni-531046"
	Nov 23 09:09:18 newest-cni-531046 kubelet[676]: I1123 09:09:18.534653     676 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Nov 23 09:09:18 newest-cni-531046 kubelet[676]: I1123 09:09:18.535540     676 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Nov 23 09:09:18 newest-cni-531046 kubelet[676]: E1123 09:09:18.578481     676 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-531046\" already exists" pod="kube-system/etcd-newest-cni-531046"
	Nov 23 09:09:18 newest-cni-531046 kubelet[676]: I1123 09:09:18.578520     676 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-newest-cni-531046"
	Nov 23 09:09:18 newest-cni-531046 kubelet[676]: E1123 09:09:18.585702     676 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-531046\" already exists" pod="kube-system/kube-apiserver-newest-cni-531046"
	Nov 23 09:09:18 newest-cni-531046 kubelet[676]: I1123 09:09:18.585743     676 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-newest-cni-531046"
	Nov 23 09:09:18 newest-cni-531046 kubelet[676]: E1123 09:09:18.592328     676 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-newest-cni-531046\" already exists" pod="kube-system/kube-controller-manager-newest-cni-531046"
	Nov 23 09:09:18 newest-cni-531046 kubelet[676]: I1123 09:09:18.592361     676 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-newest-cni-531046"
	Nov 23 09:09:18 newest-cni-531046 kubelet[676]: E1123 09:09:18.596836     676 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-531046\" already exists" pod="kube-system/kube-scheduler-newest-cni-531046"
	Nov 23 09:09:18 newest-cni-531046 kubelet[676]: I1123 09:09:18.964188     676 apiserver.go:52] "Watching apiserver"
	Nov 23 09:09:18 newest-cni-531046 kubelet[676]: I1123 09:09:18.968943     676 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Nov 23 09:09:19 newest-cni-531046 kubelet[676]: I1123 09:09:19.038563     676 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/72da9944-1b43-4f59-b27a-78a6ebd8f3dc-lib-modules\") pod \"kindnet-pbp7c\" (UID: \"72da9944-1b43-4f59-b27a-78a6ebd8f3dc\") " pod="kube-system/kindnet-pbp7c"
	Nov 23 09:09:19 newest-cni-531046 kubelet[676]: I1123 09:09:19.038613     676 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a0812143-d250-4445-85b7-dc7d4dbb23ad-xtables-lock\") pod \"kube-proxy-4bpzx\" (UID: \"a0812143-d250-4445-85b7-dc7d4dbb23ad\") " pod="kube-system/kube-proxy-4bpzx"
	Nov 23 09:09:19 newest-cni-531046 kubelet[676]: I1123 09:09:19.038637     676 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a0812143-d250-4445-85b7-dc7d4dbb23ad-lib-modules\") pod \"kube-proxy-4bpzx\" (UID: \"a0812143-d250-4445-85b7-dc7d4dbb23ad\") " pod="kube-system/kube-proxy-4bpzx"
	Nov 23 09:09:19 newest-cni-531046 kubelet[676]: I1123 09:09:19.038694     676 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/72da9944-1b43-4f59-b27a-78a6ebd8f3dc-cni-cfg\") pod \"kindnet-pbp7c\" (UID: \"72da9944-1b43-4f59-b27a-78a6ebd8f3dc\") " pod="kube-system/kindnet-pbp7c"
	Nov 23 09:09:19 newest-cni-531046 kubelet[676]: I1123 09:09:19.038730     676 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/72da9944-1b43-4f59-b27a-78a6ebd8f3dc-xtables-lock\") pod \"kindnet-pbp7c\" (UID: \"72da9944-1b43-4f59-b27a-78a6ebd8f3dc\") " pod="kube-system/kindnet-pbp7c"
	Nov 23 09:09:20 newest-cni-531046 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 23 09:09:20 newest-cni-531046 kubelet[676]: I1123 09:09:20.972832     676 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Nov 23 09:09:20 newest-cni-531046 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 23 09:09:20 newest-cni-531046 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

-- /stdout --
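The api_server.go lines in the log above show a standard readiness loop: an unauthenticated GET against https://192.168.76.2:8443/healthz roughly every 500ms, where 403 (RBAC not yet bootstrapped for anonymous users) and 500 (post-start hooks such as rbac/bootstrap-roles still failing) both mean "not ready yet", and only a 200 "ok" ends the wait. The Go sketch below illustrates that pattern; it is not minikube's actual implementation, and the URL, interval, and timeout are assumptions taken from the log.

// Minimal sketch of the healthz polling pattern seen in the
// api_server.go log lines above. Illustrative only: the endpoint,
// the 500ms interval, and the timeout are assumptions from the log.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func waitForHealthz(url string, timeout time.Duration) error {
	// The bootstrapping apiserver serves a cert the probe cannot verify
	// and the probe runs anonymously, so TLS verification is skipped.
	client := &http.Client{
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
		Timeout: 2 * time.Second,
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Printf("healthz returned 200: %s\n", body)
				return nil
			}
			// 403 and 500 are both "keep waiting", as in the log.
			fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver not healthy within %s", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.76.2:8443/healthz", time.Minute); err != nil {
		fmt.Println(err)
	}
}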
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-531046 -n newest-cni-531046
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-531046 -n newest-cni-531046: exit status 2 (337.180895ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
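The "(may be ok)" note reflects that `minikube status` reports cluster state through its exit code as well as stdout: here stdout prints Running for the apiserver while the overall exit status 2 signals that some other component is not. A rough Go sketch of reading both channels, assuming the binary path and profile name from the log:

// Sketch: run `minikube status` and read both stdout and the exit
// code. Illustrative only; the reading of code 2 here is an
// assumption based on the harness's "(may be ok)" comment.
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-amd64", "status",
		"--format={{.APIServer}}", "-p", "newest-cni-531046")
	out, err := cmd.Output()
	fmt.Printf("apiserver: %s\n", out)
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		// Nonzero exit with "Running" on stdout matches the log: the
		// apiserver is up but the cluster as a whole is not healthy.
		fmt.Println("exit code:", exitErr.ExitCode())
	}
}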
helpers_test.go:269: (dbg) Run:  kubectl --context newest-cni-531046 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: coredns-66bc5c9577-gk265 storage-provisioner dashboard-metrics-scraper-6ffb444bf9-s9dpj kubernetes-dashboard-855c9754f9-bxx6f
helpers_test.go:282: ======> post-mortem[TestStartStop/group/newest-cni/serial/Pause]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context newest-cni-531046 describe pod coredns-66bc5c9577-gk265 storage-provisioner dashboard-metrics-scraper-6ffb444bf9-s9dpj kubernetes-dashboard-855c9754f9-bxx6f
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context newest-cni-531046 describe pod coredns-66bc5c9577-gk265 storage-provisioner dashboard-metrics-scraper-6ffb444bf9-s9dpj kubernetes-dashboard-855c9754f9-bxx6f: exit status 1 (68.524218ms)

** stderr ** 
	Error from server (NotFound): pods "coredns-66bc5c9577-gk265" not found
	Error from server (NotFound): pods "storage-provisioner" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-6ffb444bf9-s9dpj" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-bxx6f" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context newest-cni-531046 describe pod coredns-66bc5c9577-gk265 storage-provisioner dashboard-metrics-scraper-6ffb444bf9-s9dpj kubernetes-dashboard-855c9754f9-bxx6f: exit status 1
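The post-mortem pattern above - list every pod whose phase is not Running via a field selector, then describe each by name - is inherently racy: the pods can be deleted between the two calls, which is exactly why all four describes return NotFound here. A sketch of the same two-step query, assuming kubectl is on PATH; the context name is taken from the log:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	ctx := "newest-cni-531046" // context name from the run above

	// Step 1: names of all pods not in phase Running, across all namespaces.
	out, err := exec.Command("kubectl", "--context", ctx, "get", "po", "-A",
		"-o=jsonpath={.items[*].metadata.name}",
		"--field-selector=status.phase!=Running").Output()
	if err != nil {
		fmt.Println("list failed:", err)
		return
	}

	// Step 2: describe each pod; NotFound just means it vanished in between.
	for _, name := range strings.Fields(string(out)) {
		desc, err := exec.Command("kubectl", "--context", ctx, "describe", "pod", name).CombinedOutput()
		if err != nil {
			fmt.Printf("describe %s: %v (already gone?)\n", name, err)
			continue
		}
		fmt.Print(string(desc))
	}
}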
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (5.92s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (6.27s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-602386 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p default-k8s-diff-port-602386 --alsologtostderr -v=1: exit status 80 (2.478607202s)
-- stdout --
	* Pausing node default-k8s-diff-port-602386 ...
-- /stdout --
** stderr ** 
	I1123 09:09:25.189952  433992 out.go:360] Setting OutFile to fd 1 ...
	I1123 09:09:25.190110  433992 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 09:09:25.190123  433992 out.go:374] Setting ErrFile to fd 2...
	I1123 09:09:25.190130  433992 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 09:09:25.190427  433992 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21969-103686/.minikube/bin
	I1123 09:09:25.190766  433992 out.go:368] Setting JSON to false
	I1123 09:09:25.190795  433992 mustload.go:66] Loading cluster: default-k8s-diff-port-602386
	I1123 09:09:25.191323  433992 config.go:182] Loaded profile config "default-k8s-diff-port-602386": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 09:09:25.191903  433992 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-602386 --format={{.State.Status}}
	I1123 09:09:25.212772  433992 host.go:66] Checking if "default-k8s-diff-port-602386" exists ...
	I1123 09:09:25.213133  433992 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 09:09:25.274264  433992 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:52 OomKillDisable:false NGoroutines:66 SystemTime:2025-11-23 09:09:25.263993045 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1123 09:09:25.274842  433992 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-
cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1763503576-21924/minikube-v1.37.0-1763503576-21924-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1763503576-21924-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qe
mu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:default-k8s-diff-port-602386 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s
(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1123 09:09:25.276651  433992 out.go:179] * Pausing node default-k8s-diff-port-602386 ... 
	I1123 09:09:25.277661  433992 host.go:66] Checking if "default-k8s-diff-port-602386" exists ...
	I1123 09:09:25.277996  433992 ssh_runner.go:195] Run: systemctl --version
	I1123 09:09:25.278046  433992 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-602386
	I1123 09:09:25.296807  433992 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33123 SSHKeyPath:/home/jenkins/minikube-integration/21969-103686/.minikube/machines/default-k8s-diff-port-602386/id_rsa Username:docker}
	I1123 09:09:25.396502  433992 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 09:09:25.419591  433992 pause.go:52] kubelet running: true
	I1123 09:09:25.419670  433992 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1123 09:09:25.604869  433992 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1123 09:09:25.604978  433992 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1123 09:09:25.675254  433992 cri.go:89] found id: "f14ee7020f52eaf7fcb7b295bf1d8c156df5ee3eb3c0b0ceb8d09c76d808ccc2"
	I1123 09:09:25.675287  433992 cri.go:89] found id: "cd96cf7cc0e773f467f3b68dff638e0dd554eef88b837e152f12725e95e7f10d"
	I1123 09:09:25.675293  433992 cri.go:89] found id: "aa26fc448ed8012666658fc3bdc730115691445a45555fec8b7f533709c28996"
	I1123 09:09:25.675300  433992 cri.go:89] found id: "afed45fbbc92d2029b02e897ae37cc210e36f8800590cdeafc00d760c4e9fd26"
	I1123 09:09:25.675304  433992 cri.go:89] found id: "5eb3b51fac344707415ffe7f336121a5d12830688403e98b7b0b94240d69fcb1"
	I1123 09:09:25.675311  433992 cri.go:89] found id: "59138b2d822688d55c6f5894e7864beb2d6fa20594a1b422e8d201e2f8e1c1e2"
	I1123 09:09:25.675321  433992 cri.go:89] found id: "1adb64fac9cd8ca83cde2ea33c1a1d01fd97bd090a659c910fd2247606de3613"
	I1123 09:09:25.675326  433992 cri.go:89] found id: "cb6038e0d1fc65f02647a28477fb55a987cc2404a8c90e7eb192a2e5f4e18b98"
	I1123 09:09:25.675333  433992 cri.go:89] found id: "88d09657521f5eeced3d58b537526c35a1a86d0c7389280ba5c54672110cbd64"
	I1123 09:09:25.675357  433992 cri.go:89] found id: "11275f4b0df65c4816abcdde0d17361833f91eff663b0579a1fa05e5bb378cdd"
	I1123 09:09:25.675365  433992 cri.go:89] found id: "91e83b67da04c4cfe73ddf9e56593b3d11b06e0e02c509a14bc1cbdb84283162"
	I1123 09:09:25.675370  433992 cri.go:89] found id: ""
	I1123 09:09:25.675429  433992 ssh_runner.go:195] Run: sudo runc list -f json
	I1123 09:09:25.686958  433992 retry.go:31] will retry after 174.437439ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T09:09:25Z" level=error msg="open /run/runc: no such file or directory"
	I1123 09:09:25.862404  433992 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 09:09:25.876076  433992 pause.go:52] kubelet running: false
	I1123 09:09:25.876137  433992 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1123 09:09:26.045057  433992 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1123 09:09:26.045121  433992 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1123 09:09:26.118424  433992 cri.go:89] found id: "f14ee7020f52eaf7fcb7b295bf1d8c156df5ee3eb3c0b0ceb8d09c76d808ccc2"
	I1123 09:09:26.118447  433992 cri.go:89] found id: "cd96cf7cc0e773f467f3b68dff638e0dd554eef88b837e152f12725e95e7f10d"
	I1123 09:09:26.118452  433992 cri.go:89] found id: "aa26fc448ed8012666658fc3bdc730115691445a45555fec8b7f533709c28996"
	I1123 09:09:26.118456  433992 cri.go:89] found id: "afed45fbbc92d2029b02e897ae37cc210e36f8800590cdeafc00d760c4e9fd26"
	I1123 09:09:26.118459  433992 cri.go:89] found id: "5eb3b51fac344707415ffe7f336121a5d12830688403e98b7b0b94240d69fcb1"
	I1123 09:09:26.118462  433992 cri.go:89] found id: "59138b2d822688d55c6f5894e7864beb2d6fa20594a1b422e8d201e2f8e1c1e2"
	I1123 09:09:26.118465  433992 cri.go:89] found id: "1adb64fac9cd8ca83cde2ea33c1a1d01fd97bd090a659c910fd2247606de3613"
	I1123 09:09:26.118468  433992 cri.go:89] found id: "cb6038e0d1fc65f02647a28477fb55a987cc2404a8c90e7eb192a2e5f4e18b98"
	I1123 09:09:26.118471  433992 cri.go:89] found id: "88d09657521f5eeced3d58b537526c35a1a86d0c7389280ba5c54672110cbd64"
	I1123 09:09:26.118495  433992 cri.go:89] found id: "11275f4b0df65c4816abcdde0d17361833f91eff663b0579a1fa05e5bb378cdd"
	I1123 09:09:26.118506  433992 cri.go:89] found id: "91e83b67da04c4cfe73ddf9e56593b3d11b06e0e02c509a14bc1cbdb84283162"
	I1123 09:09:26.118512  433992 cri.go:89] found id: ""
	I1123 09:09:26.118560  433992 ssh_runner.go:195] Run: sudo runc list -f json
	I1123 09:09:26.133454  433992 retry.go:31] will retry after 256.70228ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T09:09:26Z" level=error msg="open /run/runc: no such file or directory"
	I1123 09:09:26.391054  433992 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 09:09:26.403832  433992 pause.go:52] kubelet running: false
	I1123 09:09:26.403881  433992 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1123 09:09:26.563993  433992 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1123 09:09:26.564091  433992 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1123 09:09:26.632283  433992 cri.go:89] found id: "f14ee7020f52eaf7fcb7b295bf1d8c156df5ee3eb3c0b0ceb8d09c76d808ccc2"
	I1123 09:09:26.632309  433992 cri.go:89] found id: "cd96cf7cc0e773f467f3b68dff638e0dd554eef88b837e152f12725e95e7f10d"
	I1123 09:09:26.632315  433992 cri.go:89] found id: "aa26fc448ed8012666658fc3bdc730115691445a45555fec8b7f533709c28996"
	I1123 09:09:26.632320  433992 cri.go:89] found id: "afed45fbbc92d2029b02e897ae37cc210e36f8800590cdeafc00d760c4e9fd26"
	I1123 09:09:26.632324  433992 cri.go:89] found id: "5eb3b51fac344707415ffe7f336121a5d12830688403e98b7b0b94240d69fcb1"
	I1123 09:09:26.632329  433992 cri.go:89] found id: "59138b2d822688d55c6f5894e7864beb2d6fa20594a1b422e8d201e2f8e1c1e2"
	I1123 09:09:26.632332  433992 cri.go:89] found id: "1adb64fac9cd8ca83cde2ea33c1a1d01fd97bd090a659c910fd2247606de3613"
	I1123 09:09:26.632335  433992 cri.go:89] found id: "cb6038e0d1fc65f02647a28477fb55a987cc2404a8c90e7eb192a2e5f4e18b98"
	I1123 09:09:26.632338  433992 cri.go:89] found id: "88d09657521f5eeced3d58b537526c35a1a86d0c7389280ba5c54672110cbd64"
	I1123 09:09:26.632345  433992 cri.go:89] found id: "11275f4b0df65c4816abcdde0d17361833f91eff663b0579a1fa05e5bb378cdd"
	I1123 09:09:26.632366  433992 cri.go:89] found id: "91e83b67da04c4cfe73ddf9e56593b3d11b06e0e02c509a14bc1cbdb84283162"
	I1123 09:09:26.632375  433992 cri.go:89] found id: ""
	I1123 09:09:26.632423  433992 ssh_runner.go:195] Run: sudo runc list -f json
	I1123 09:09:26.644289  433992 retry.go:31] will retry after 713.767197ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T09:09:26Z" level=error msg="open /run/runc: no such file or directory"
	I1123 09:09:27.358181  433992 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 09:09:27.370962  433992 pause.go:52] kubelet running: false
	I1123 09:09:27.371036  433992 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1123 09:09:27.512986  433992 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1123 09:09:27.513057  433992 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1123 09:09:27.576930  433992 cri.go:89] found id: "f14ee7020f52eaf7fcb7b295bf1d8c156df5ee3eb3c0b0ceb8d09c76d808ccc2"
	I1123 09:09:27.576954  433992 cri.go:89] found id: "cd96cf7cc0e773f467f3b68dff638e0dd554eef88b837e152f12725e95e7f10d"
	I1123 09:09:27.576960  433992 cri.go:89] found id: "aa26fc448ed8012666658fc3bdc730115691445a45555fec8b7f533709c28996"
	I1123 09:09:27.576983  433992 cri.go:89] found id: "afed45fbbc92d2029b02e897ae37cc210e36f8800590cdeafc00d760c4e9fd26"
	I1123 09:09:27.576987  433992 cri.go:89] found id: "5eb3b51fac344707415ffe7f336121a5d12830688403e98b7b0b94240d69fcb1"
	I1123 09:09:27.576992  433992 cri.go:89] found id: "59138b2d822688d55c6f5894e7864beb2d6fa20594a1b422e8d201e2f8e1c1e2"
	I1123 09:09:27.576996  433992 cri.go:89] found id: "1adb64fac9cd8ca83cde2ea33c1a1d01fd97bd090a659c910fd2247606de3613"
	I1123 09:09:27.577001  433992 cri.go:89] found id: "cb6038e0d1fc65f02647a28477fb55a987cc2404a8c90e7eb192a2e5f4e18b98"
	I1123 09:09:27.577005  433992 cri.go:89] found id: "88d09657521f5eeced3d58b537526c35a1a86d0c7389280ba5c54672110cbd64"
	I1123 09:09:27.577014  433992 cri.go:89] found id: "11275f4b0df65c4816abcdde0d17361833f91eff663b0579a1fa05e5bb378cdd"
	I1123 09:09:27.577018  433992 cri.go:89] found id: "91e83b67da04c4cfe73ddf9e56593b3d11b06e0e02c509a14bc1cbdb84283162"
	I1123 09:09:27.577022  433992 cri.go:89] found id: ""
	I1123 09:09:27.577086  433992 ssh_runner.go:195] Run: sudo runc list -f json
	I1123 09:09:27.590440  433992 out.go:203] 
	W1123 09:09:27.591522  433992 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T09:09:27Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T09:09:27Z" level=error msg="open /run/runc: no such file or directory"
	
	W1123 09:09:27.591545  433992 out.go:285] * 
	* 
	W1123 09:09:27.595468  433992 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1123 09:09:27.596471  433992 out.go:203]
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p default-k8s-diff-port-602386 --alsologtostderr -v=1 failed: exit status 80
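The mechanics of the exit-80 failure are visible in the stderr above: after kubelet is disabled, pause enumerates running containers, and every `sudo runc list -f json` attempt dies with `open /run/runc: no such file or directory`. /run/runc is runc's default state root, which can be absent when the runtime was invoked with a different --root, as CRI-O setups may do. The backoff loop retries after roughly 174ms, 256ms and 713ms, then surfaces GUEST_PAUSE. A sketch of that retry shape with illustrative names, not minikube's actual retry.go API:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// listRunc runs the same command the log shows. It fails when runc's default
// state root (/run/runc) does not exist, e.g. because the runtime was started
// with a different --root.
func listRunc() ([]byte, error) {
	return exec.Command("sudo", "runc", "list", "-f", "json").Output()
}

func main() {
	// Delays approximating the retry intervals recorded in the log above.
	delays := []time.Duration{174 * time.Millisecond, 256 * time.Millisecond, 713 * time.Millisecond}
	var lastErr error
	for attempt := 0; ; attempt++ {
		out, err := listRunc()
		if err == nil {
			fmt.Println(string(out))
			return
		}
		lastErr = err
		if attempt >= len(delays) {
			break // retries exhausted: this is where pause exits 80
		}
		fmt.Printf("will retry after %v: %v\n", delays[attempt], err)
		time.Sleep(delays[attempt])
	}
	fmt.Println("Exiting due to GUEST_PAUSE:", lastErr)
}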
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-602386
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-602386:
-- stdout --
	[
	    {
	        "Id": "6c3d05e1255194aef2389817b0cc65aba6302f39805c376439e608e1f11bacd3",
	        "Created": "2025-11-23T09:07:12.808038368Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 417101,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-23T09:08:19.463608458Z",
	            "FinishedAt": "2025-11-23T09:08:18.530680539Z"
	        },
	        "Image": "sha256:133ca4ac39008d0056ad45d8cb70521d6b70d6e1b8bbff4678fd4b354efbdf70",
	        "ResolvConfPath": "/var/lib/docker/containers/6c3d05e1255194aef2389817b0cc65aba6302f39805c376439e608e1f11bacd3/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/6c3d05e1255194aef2389817b0cc65aba6302f39805c376439e608e1f11bacd3/hostname",
	        "HostsPath": "/var/lib/docker/containers/6c3d05e1255194aef2389817b0cc65aba6302f39805c376439e608e1f11bacd3/hosts",
	        "LogPath": "/var/lib/docker/containers/6c3d05e1255194aef2389817b0cc65aba6302f39805c376439e608e1f11bacd3/6c3d05e1255194aef2389817b0cc65aba6302f39805c376439e608e1f11bacd3-json.log",
	        "Name": "/default-k8s-diff-port-602386",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-602386:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "default-k8s-diff-port-602386",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "6c3d05e1255194aef2389817b0cc65aba6302f39805c376439e608e1f11bacd3",
	                "LowerDir": "/var/lib/docker/overlay2/bb5d6810584e73e290c3816b7cb94fabd3ce1d5d8e0d0a63df744232dca3547d-init/diff:/var/lib/docker/overlay2/5d7426d7b7590330a796f9ce8929a3dfaf6f95af5e86f4e4ea7f2a7f53308616/diff",
	                "MergedDir": "/var/lib/docker/overlay2/bb5d6810584e73e290c3816b7cb94fabd3ce1d5d8e0d0a63df744232dca3547d/merged",
	                "UpperDir": "/var/lib/docker/overlay2/bb5d6810584e73e290c3816b7cb94fabd3ce1d5d8e0d0a63df744232dca3547d/diff",
	                "WorkDir": "/var/lib/docker/overlay2/bb5d6810584e73e290c3816b7cb94fabd3ce1d5d8e0d0a63df744232dca3547d/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-602386",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-602386/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-602386",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-602386",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-602386",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "c0d7328bf99677c247b5508f576c4e9ba9b74b5f7cb31d47a8bd044eac10674b",
	            "SandboxKey": "/var/run/docker/netns/c0d7328bf996",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33123"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33124"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33127"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33125"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33126"
	                    }
	                ]
	            },
	            "Networks": {
	                "default-k8s-diff-port-602386": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "d9296937e29fbbdf6c66a1bc434a999db9b649eec0fa16933c388a9a19b340fe",
	                    "EndpointID": "d100b2efd07dc0306a961585428808317559df6a14a2f7479a55ba207a9f2205",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "MacAddress": "92:f7:3b:c9:b9:ca",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-602386",
	                        "6c3d05e12551"
	                    ]
	                }
	            }
	        }
	    }
	]
-- /stdout --
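The inspect payload above is where the earlier template lookup ('{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}') resolves the SSH endpoint 127.0.0.1:33123. The same Ports map can also be decoded directly; a sketch whose struct covers only the fields this lookup needs:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// container models just the NetworkSettings.Ports slice that `docker inspect`
// emits; everything else in the payload is ignored during decoding.
type container struct {
	NetworkSettings struct {
		Ports map[string][]struct {
			HostIp   string
			HostPort string
		}
	}
}

func main() {
	out, err := exec.Command("docker", "inspect", "default-k8s-diff-port-602386").Output()
	if err != nil {
		panic(err)
	}
	var containers []container // inspect always returns a JSON array
	if err := json.Unmarshal(out, &containers); err != nil || len(containers) == 0 {
		panic(fmt.Sprint("decode failed: ", err))
	}
	for _, b := range containers[0].NetworkSettings.Ports["22/tcp"] {
		fmt.Printf("ssh endpoint: %s:%s\n", b.HostIp, b.HostPort) // 127.0.0.1:33123 above
	}
}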
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-602386 -n default-k8s-diff-port-602386
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-602386 -n default-k8s-diff-port-602386: exit status 2 (336.147599ms)
-- stdout --
	Running
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
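The `--format={{.Host}}` flag above is a Go text/template rendered against the status object, which is how the command can print Running yet exit 2: the rendered text and the exit code are computed independently ("may be ok", as the helper notes). A sketch of that rendering with an illustrative struct, not minikube's actual status type:

package main

import (
	"os"
	"text/template"
)

// Status stands in for the object the CLI renders; the field names mirror the
// templates used in this log ({{.Host}}, {{.APIServer}}), but the real
// minikube type is richer.
type Status struct {
	Host      string
	APIServer string
}

func main() {
	st := Status{Host: "Running", APIServer: "Paused"}
	// The flag value becomes the template source verbatim.
	tmpl := template.Must(template.New("status").Parse("{{.Host}}"))
	if err := tmpl.Execute(os.Stdout, st); err != nil {
		panic(err)
	}
	// Prints "Running" - regardless of what exit code the CLI later chooses.
}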
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-602386 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-602386 logs -n 25: (1.157836289s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p embed-certs-529341 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-529341           │ jenkins │ v1.37.0 │ 23 Nov 25 09:08 UTC │ 23 Nov 25 09:09 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-602386 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-602386 │ jenkins │ v1.37.0 │ 23 Nov 25 09:08 UTC │ 23 Nov 25 09:08 UTC │
	│ start   │ -p default-k8s-diff-port-602386 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-602386 │ jenkins │ v1.37.0 │ 23 Nov 25 09:08 UTC │ 23 Nov 25 09:09 UTC │
	│ image   │ old-k8s-version-054094 image list --format=json                                                                                                                                                                                               │ old-k8s-version-054094       │ jenkins │ v1.37.0 │ 23 Nov 25 09:08 UTC │ 23 Nov 25 09:08 UTC │
	│ pause   │ -p old-k8s-version-054094 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-054094       │ jenkins │ v1.37.0 │ 23 Nov 25 09:08 UTC │                     │
	│ delete  │ -p old-k8s-version-054094                                                                                                                                                                                                                     │ old-k8s-version-054094       │ jenkins │ v1.37.0 │ 23 Nov 25 09:08 UTC │ 23 Nov 25 09:08 UTC │
	│ delete  │ -p old-k8s-version-054094                                                                                                                                                                                                                     │ old-k8s-version-054094       │ jenkins │ v1.37.0 │ 23 Nov 25 09:08 UTC │ 23 Nov 25 09:08 UTC │
	│ start   │ -p newest-cni-531046 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-531046            │ jenkins │ v1.37.0 │ 23 Nov 25 09:08 UTC │ 23 Nov 25 09:09 UTC │
	│ image   │ no-preload-619589 image list --format=json                                                                                                                                                                                                    │ no-preload-619589            │ jenkins │ v1.37.0 │ 23 Nov 25 09:08 UTC │ 23 Nov 25 09:08 UTC │
	│ pause   │ -p no-preload-619589 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-619589            │ jenkins │ v1.37.0 │ 23 Nov 25 09:08 UTC │                     │
	│ delete  │ -p no-preload-619589                                                                                                                                                                                                                          │ no-preload-619589            │ jenkins │ v1.37.0 │ 23 Nov 25 09:08 UTC │ 23 Nov 25 09:08 UTC │
	│ delete  │ -p no-preload-619589                                                                                                                                                                                                                          │ no-preload-619589            │ jenkins │ v1.37.0 │ 23 Nov 25 09:08 UTC │ 23 Nov 25 09:08 UTC │
	│ addons  │ enable metrics-server -p newest-cni-531046 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-531046            │ jenkins │ v1.37.0 │ 23 Nov 25 09:09 UTC │                     │
	│ stop    │ -p newest-cni-531046 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-531046            │ jenkins │ v1.37.0 │ 23 Nov 25 09:09 UTC │ 23 Nov 25 09:09 UTC │
	│ addons  │ enable dashboard -p newest-cni-531046 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-531046            │ jenkins │ v1.37.0 │ 23 Nov 25 09:09 UTC │ 23 Nov 25 09:09 UTC │
	│ start   │ -p newest-cni-531046 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-531046            │ jenkins │ v1.37.0 │ 23 Nov 25 09:09 UTC │ 23 Nov 25 09:09 UTC │
	│ image   │ embed-certs-529341 image list --format=json                                                                                                                                                                                                   │ embed-certs-529341           │ jenkins │ v1.37.0 │ 23 Nov 25 09:09 UTC │ 23 Nov 25 09:09 UTC │
	│ pause   │ -p embed-certs-529341 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-529341           │ jenkins │ v1.37.0 │ 23 Nov 25 09:09 UTC │                     │
	│ delete  │ -p embed-certs-529341                                                                                                                                                                                                                         │ embed-certs-529341           │ jenkins │ v1.37.0 │ 23 Nov 25 09:09 UTC │ 23 Nov 25 09:09 UTC │
	│ image   │ newest-cni-531046 image list --format=json                                                                                                                                                                                                    │ newest-cni-531046            │ jenkins │ v1.37.0 │ 23 Nov 25 09:09 UTC │ 23 Nov 25 09:09 UTC │
	│ delete  │ -p embed-certs-529341                                                                                                                                                                                                                         │ embed-certs-529341           │ jenkins │ v1.37.0 │ 23 Nov 25 09:09 UTC │ 23 Nov 25 09:09 UTC │
	│ pause   │ -p newest-cni-531046 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-531046            │ jenkins │ v1.37.0 │ 23 Nov 25 09:09 UTC │                     │
	│ image   │ default-k8s-diff-port-602386 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-602386 │ jenkins │ v1.37.0 │ 23 Nov 25 09:09 UTC │ 23 Nov 25 09:09 UTC │
	│ pause   │ -p default-k8s-diff-port-602386 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-602386 │ jenkins │ v1.37.0 │ 23 Nov 25 09:09 UTC │                     │
	│ delete  │ -p newest-cni-531046                                                                                                                                                                                                                          │ newest-cni-531046            │ jenkins │ v1.37.0 │ 23 Nov 25 09:09 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/23 09:09:09
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1123 09:09:09.393949  428718 out.go:360] Setting OutFile to fd 1 ...
	I1123 09:09:09.394192  428718 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 09:09:09.394201  428718 out.go:374] Setting ErrFile to fd 2...
	I1123 09:09:09.394206  428718 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 09:09:09.394406  428718 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21969-103686/.minikube/bin
	I1123 09:09:09.394917  428718 out.go:368] Setting JSON to false
	I1123 09:09:09.396361  428718 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":6689,"bootTime":1763882260,"procs":405,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1123 09:09:09.396420  428718 start.go:143] virtualization: kvm guest
	I1123 09:09:09.398144  428718 out.go:179] * [newest-cni-531046] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1123 09:09:09.399754  428718 notify.go:221] Checking for updates...
	I1123 09:09:09.399766  428718 out.go:179]   - MINIKUBE_LOCATION=21969
	I1123 09:09:09.402731  428718 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1123 09:09:09.404051  428718 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21969-103686/kubeconfig
	I1123 09:09:09.405353  428718 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21969-103686/.minikube
	I1123 09:09:09.406721  428718 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1123 09:09:09.408298  428718 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1123 09:09:09.410076  428718 config.go:182] Loaded profile config "newest-cni-531046": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 09:09:09.410631  428718 driver.go:422] Setting default libvirt URI to qemu:///system
	I1123 09:09:09.438677  428718 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1123 09:09:09.438842  428718 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 09:09:09.499289  428718 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:61 OomKillDisable:false NGoroutines:66 SystemTime:2025-11-23 09:09:09.488360013 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1123 09:09:09.499392  428718 docker.go:319] overlay module found
	I1123 09:09:09.501298  428718 out.go:179] * Using the docker driver based on existing profile
	I1123 09:09:09.502521  428718 start.go:309] selected driver: docker
	I1123 09:09:09.502539  428718 start.go:927] validating driver "docker" against &{Name:newest-cni-531046 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-531046 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker Mou
ntIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 09:09:09.502628  428718 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1123 09:09:09.503156  428718 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 09:09:09.567159  428718 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:61 OomKillDisable:false NGoroutines:66 SystemTime:2025-11-23 09:09:09.555013229 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1123 09:09:09.567643  428718 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1123 09:09:09.567695  428718 cni.go:84] Creating CNI manager for ""
	I1123 09:09:09.567768  428718 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1123 09:09:09.567832  428718 start.go:353] cluster config:
	{Name:newest-cni-531046 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-531046 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 09:09:09.569790  428718 out.go:179] * Starting "newest-cni-531046" primary control-plane node in "newest-cni-531046" cluster
	I1123 09:09:09.570956  428718 cache.go:134] Beginning downloading kic base image for docker with crio
	I1123 09:09:09.573142  428718 out.go:179] * Pulling base image v0.0.48-1763789673-21948 ...
	I1123 09:09:09.574347  428718 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1123 09:09:09.574385  428718 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21969-103686/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1123 09:09:09.574403  428718 cache.go:65] Caching tarball of preloaded images
	I1123 09:09:09.574469  428718 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1123 09:09:09.574518  428718 preload.go:238] Found /home/jenkins/minikube-integration/21969-103686/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1123 09:09:09.574535  428718 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1123 09:09:09.574672  428718 profile.go:143] Saving config to /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/newest-cni-531046/config.json ...
	I1123 09:09:09.596348  428718 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon, skipping pull
	I1123 09:09:09.596375  428718 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in daemon, skipping load
	I1123 09:09:09.596395  428718 cache.go:243] Successfully downloaded all kic artifacts
	I1123 09:09:09.596441  428718 start.go:360] acquireMachinesLock for newest-cni-531046: {Name:mk2e7449a31b4c230f352b5cfe12c4dd1ce5e4f0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1123 09:09:09.596513  428718 start.go:364] duration metric: took 46.31µs to acquireMachinesLock for "newest-cni-531046"
	I1123 09:09:09.596535  428718 start.go:96] Skipping create...Using existing machine configuration
	I1123 09:09:09.596546  428718 fix.go:54] fixHost starting: 
	I1123 09:09:09.596775  428718 cli_runner.go:164] Run: docker container inspect newest-cni-531046 --format={{.State.Status}}
	I1123 09:09:09.615003  428718 fix.go:112] recreateIfNeeded on newest-cni-531046: state=Stopped err=<nil>
	W1123 09:09:09.615044  428718 fix.go:138] unexpected machine state, will restart: <nil>
	I1123 09:09:10.962211  416838 pod_ready.go:94] pod "coredns-66bc5c9577-64rdm" is "Ready"
	I1123 09:09:10.962238  416838 pod_ready.go:86] duration metric: took 41.505811079s for pod "coredns-66bc5c9577-64rdm" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:09:10.964724  416838 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-602386" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:09:10.968263  416838 pod_ready.go:94] pod "etcd-default-k8s-diff-port-602386" is "Ready"
	I1123 09:09:10.968282  416838 pod_ready.go:86] duration metric: took 3.536222ms for pod "etcd-default-k8s-diff-port-602386" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:09:10.969953  416838 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-602386" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:09:10.973341  416838 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-602386" is "Ready"
	I1123 09:09:10.973358  416838 pod_ready.go:86] duration metric: took 3.359803ms for pod "kube-apiserver-default-k8s-diff-port-602386" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:09:10.975266  416838 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-602386" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:09:11.160920  416838 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-602386" is "Ready"
	I1123 09:09:11.160945  416838 pod_ready.go:86] duration metric: took 185.660534ms for pod "kube-controller-manager-default-k8s-diff-port-602386" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:09:11.361102  416838 pod_ready.go:83] waiting for pod "kube-proxy-wnrqx" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:09:11.760631  416838 pod_ready.go:94] pod "kube-proxy-wnrqx" is "Ready"
	I1123 09:09:11.760661  416838 pod_ready.go:86] duration metric: took 399.534821ms for pod "kube-proxy-wnrqx" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:09:11.961014  416838 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-602386" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:09:12.360788  416838 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-602386" is "Ready"
	I1123 09:09:12.360818  416838 pod_ready.go:86] duration metric: took 399.779479ms for pod "kube-scheduler-default-k8s-diff-port-602386" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:09:12.360830  416838 pod_ready.go:40] duration metric: took 42.908765939s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1123 09:09:12.404049  416838 start.go:625] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1123 09:09:12.405650  416838 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-602386" cluster and "default" namespace by default
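
The pod_ready waits above poll each kube-system pod by label until it reports Ready. Outside the test harness, roughly the same check can be done with kubectl wait; the label selectors below are taken from the log, while the namespace and timeout are illustrative:

	# Block until CoreDNS and kube-proxy report Ready (labels as logged above).
	kubectl -n kube-system wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=120s
	kubectl -n kube-system wait --for=condition=Ready pod -l k8s-app=kube-proxy --timeout=120s
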
	I1123 09:09:09.616814  428718 out.go:252] * Restarting existing docker container for "newest-cni-531046" ...
	I1123 09:09:09.616880  428718 cli_runner.go:164] Run: docker start newest-cni-531046
	I1123 09:09:09.907672  428718 cli_runner.go:164] Run: docker container inspect newest-cni-531046 --format={{.State.Status}}
	I1123 09:09:09.927111  428718 kic.go:430] container "newest-cni-531046" state is running.
	I1123 09:09:09.927497  428718 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-531046
	I1123 09:09:09.947618  428718 profile.go:143] Saving config to /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/newest-cni-531046/config.json ...
	I1123 09:09:09.947894  428718 machine.go:94] provisionDockerMachine start ...
	I1123 09:09:09.948010  428718 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-531046
	I1123 09:09:09.972117  428718 main.go:143] libmachine: Using SSH client type: native
	I1123 09:09:09.972394  428718 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I1123 09:09:09.972403  428718 main.go:143] libmachine: About to run SSH command:
	hostname
	I1123 09:09:09.973126  428718 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:56888->127.0.0.1:33133: read: connection reset by peer
	I1123 09:09:13.118820  428718 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-531046
	
	I1123 09:09:13.118862  428718 ubuntu.go:182] provisioning hostname "newest-cni-531046"
	I1123 09:09:13.118924  428718 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-531046
	I1123 09:09:13.137403  428718 main.go:143] libmachine: Using SSH client type: native
	I1123 09:09:13.137732  428718 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I1123 09:09:13.137754  428718 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-531046 && echo "newest-cni-531046" | sudo tee /etc/hostname
	I1123 09:09:13.292448  428718 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-531046
	
	I1123 09:09:13.292567  428718 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-531046
	I1123 09:09:13.312639  428718 main.go:143] libmachine: Using SSH client type: native
	I1123 09:09:13.312883  428718 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I1123 09:09:13.312902  428718 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-531046' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-531046/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-531046' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1123 09:09:13.456742  428718 main.go:143] libmachine: SSH cmd err, output: <nil>: 
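
The SSH snippet above is an idempotent /etc/hosts edit: it does nothing if an entry for the new hostname already exists, rewrites an existing 127.0.1.1 line if there is one, and only appends otherwise, so repeated provisioning never duplicates entries. A quick way to confirm the result from inside the container (illustrative):

	# The hostname should now resolve locally via the 127.0.1.1 entry.
	getent hosts newest-cni-531046
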
	I1123 09:09:13.456786  428718 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21969-103686/.minikube CaCertPath:/home/jenkins/minikube-integration/21969-103686/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21969-103686/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21969-103686/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21969-103686/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21969-103686/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21969-103686/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21969-103686/.minikube}
	I1123 09:09:13.456823  428718 ubuntu.go:190] setting up certificates
	I1123 09:09:13.456836  428718 provision.go:84] configureAuth start
	I1123 09:09:13.456907  428718 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-531046
	I1123 09:09:13.476479  428718 provision.go:143] copyHostCerts
	I1123 09:09:13.476551  428718 exec_runner.go:144] found /home/jenkins/minikube-integration/21969-103686/.minikube/ca.pem, removing ...
	I1123 09:09:13.476578  428718 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21969-103686/.minikube/ca.pem
	I1123 09:09:13.476667  428718 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21969-103686/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21969-103686/.minikube/ca.pem (1078 bytes)
	I1123 09:09:13.476821  428718 exec_runner.go:144] found /home/jenkins/minikube-integration/21969-103686/.minikube/cert.pem, removing ...
	I1123 09:09:13.476836  428718 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21969-103686/.minikube/cert.pem
	I1123 09:09:13.476878  428718 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21969-103686/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21969-103686/.minikube/cert.pem (1123 bytes)
	I1123 09:09:13.476962  428718 exec_runner.go:144] found /home/jenkins/minikube-integration/21969-103686/.minikube/key.pem, removing ...
	I1123 09:09:13.476997  428718 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21969-103686/.minikube/key.pem
	I1123 09:09:13.477040  428718 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21969-103686/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21969-103686/.minikube/key.pem (1675 bytes)
	I1123 09:09:13.477127  428718 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21969-103686/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21969-103686/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21969-103686/.minikube/certs/ca-key.pem org=jenkins.newest-cni-531046 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-531046]
	I1123 09:09:13.551036  428718 provision.go:177] copyRemoteCerts
	I1123 09:09:13.551092  428718 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1123 09:09:13.551131  428718 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-531046
	I1123 09:09:13.570388  428718 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21969-103686/.minikube/machines/newest-cni-531046/id_rsa Username:docker}
	I1123 09:09:13.674461  428718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-103686/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1123 09:09:13.692480  428718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-103686/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1123 09:09:13.711416  428718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-103686/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1123 09:09:13.728169  428718 provision.go:87] duration metric: took 271.314005ms to configureAuth
	I1123 09:09:13.728202  428718 ubuntu.go:206] setting minikube options for container-runtime
	I1123 09:09:13.728420  428718 config.go:182] Loaded profile config "newest-cni-531046": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 09:09:13.728554  428718 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-531046
	I1123 09:09:13.747174  428718 main.go:143] libmachine: Using SSH client type: native
	I1123 09:09:13.747495  428718 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I1123 09:09:13.747521  428718 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1123 09:09:14.068767  428718 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1123 09:09:14.068799  428718 machine.go:97] duration metric: took 4.120887468s to provisionDockerMachine
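
The sysconfig file written above only affects CRI-O because the image's crio.service reads it as an environment file; the exact unit layout inside the kicbase image may differ, but the wiring looks roughly like this hypothetical drop-in:

	# Hypothetical drop-in showing how crio.service can pick up CRIO_MINIKUBE_OPTIONS.
	sudo mkdir -p /etc/systemd/system/crio.service.d
	cat <<'EOF' | sudo tee /etc/systemd/system/crio.service.d/10-minikube.conf
	[Service]
	EnvironmentFile=-/etc/sysconfig/crio.minikube
	# ExecStart can then append $CRIO_MINIKUBE_OPTIONS to the crio command line.
	EOF
	sudo systemctl daemon-reload && sudo systemctl restart crio
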
	I1123 09:09:14.068814  428718 start.go:293] postStartSetup for "newest-cni-531046" (driver="docker")
	I1123 09:09:14.068829  428718 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1123 09:09:14.068900  428718 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1123 09:09:14.068945  428718 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-531046
	I1123 09:09:14.088061  428718 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21969-103686/.minikube/machines/newest-cni-531046/id_rsa Username:docker}
	I1123 09:09:14.190042  428718 ssh_runner.go:195] Run: cat /etc/os-release
	I1123 09:09:14.193920  428718 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1123 09:09:14.193952  428718 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1123 09:09:14.193975  428718 filesync.go:126] Scanning /home/jenkins/minikube-integration/21969-103686/.minikube/addons for local assets ...
	I1123 09:09:14.194042  428718 filesync.go:126] Scanning /home/jenkins/minikube-integration/21969-103686/.minikube/files for local assets ...
	I1123 09:09:14.194148  428718 filesync.go:149] local asset: /home/jenkins/minikube-integration/21969-103686/.minikube/files/etc/ssl/certs/1072342.pem -> 1072342.pem in /etc/ssl/certs
	I1123 09:09:14.194286  428718 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1123 09:09:14.202503  428718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-103686/.minikube/files/etc/ssl/certs/1072342.pem --> /etc/ssl/certs/1072342.pem (1708 bytes)
	I1123 09:09:14.221567  428718 start.go:296] duration metric: took 152.735823ms for postStartSetup
	I1123 09:09:14.221638  428718 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1123 09:09:14.221678  428718 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-531046
	I1123 09:09:14.241073  428718 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21969-103686/.minikube/machines/newest-cni-531046/id_rsa Username:docker}
	I1123 09:09:14.341192  428718 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1123 09:09:14.345736  428718 fix.go:56] duration metric: took 4.749184186s for fixHost
	I1123 09:09:14.345761  428718 start.go:83] releasing machines lock for "newest-cni-531046", held for 4.749236041s
	I1123 09:09:14.345829  428718 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-531046
	I1123 09:09:14.367424  428718 ssh_runner.go:195] Run: cat /version.json
	I1123 09:09:14.367491  428718 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1123 09:09:14.367498  428718 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-531046
	I1123 09:09:14.367566  428718 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-531046
	I1123 09:09:14.387208  428718 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21969-103686/.minikube/machines/newest-cni-531046/id_rsa Username:docker}
	I1123 09:09:14.388547  428718 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21969-103686/.minikube/machines/newest-cni-531046/id_rsa Username:docker}
	I1123 09:09:14.489744  428718 ssh_runner.go:195] Run: systemctl --version
	I1123 09:09:14.553172  428718 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1123 09:09:14.597710  428718 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1123 09:09:14.603833  428718 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1123 09:09:14.603919  428718 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1123 09:09:14.613685  428718 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1123 09:09:14.613716  428718 start.go:496] detecting cgroup driver to use...
	I1123 09:09:14.613753  428718 detect.go:190] detected "systemd" cgroup driver on host os
	I1123 09:09:14.613814  428718 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1123 09:09:14.633265  428718 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1123 09:09:14.647148  428718 docker.go:218] disabling cri-docker service (if available) ...
	I1123 09:09:14.647207  428718 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1123 09:09:14.663589  428718 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1123 09:09:14.677157  428718 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1123 09:09:14.766215  428718 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1123 09:09:14.858401  428718 docker.go:234] disabling docker service ...
	I1123 09:09:14.858470  428718 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1123 09:09:14.873312  428718 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1123 09:09:14.888170  428718 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1123 09:09:14.983215  428718 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1123 09:09:15.073382  428718 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1123 09:09:15.086608  428718 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1123 09:09:15.101866  428718 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1123 09:09:15.101935  428718 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 09:09:15.111226  428718 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1123 09:09:15.111288  428718 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 09:09:15.120834  428718 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 09:09:15.130549  428718 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 09:09:15.140695  428718 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1123 09:09:15.148854  428718 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 09:09:15.157864  428718 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 09:09:15.166336  428718 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
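
After the sed runs above, /etc/crio/crio.conf.d/02-crio.conf should carry the pause image, cgroup manager, conmon cgroup, and unprivileged-port sysctl (values reconstructed from the commands; the rest of the file is untouched). A quick check:

	# Verify the edited keys; expected values shown as comments.
	sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
		/etc/crio/crio.conf.d/02-crio.conf
	# pause_image = "registry.k8s.io/pause:3.10.1"
	# cgroup_manager = "systemd"
	# conmon_cgroup = "pod"
	#   "net.ipv4.ip_unprivileged_port_start=0",
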
	I1123 09:09:15.176067  428718 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1123 09:09:15.183505  428718 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1123 09:09:15.191000  428718 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 09:09:15.295741  428718 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1123 09:09:15.433605  428718 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1123 09:09:15.433681  428718 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1123 09:09:15.439424  428718 start.go:564] Will wait 60s for crictl version
	I1123 09:09:15.439490  428718 ssh_runner.go:195] Run: which crictl
	I1123 09:09:15.444124  428718 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1123 09:09:15.469766  428718 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1123 09:09:15.469843  428718 ssh_runner.go:195] Run: crio --version
	I1123 09:09:15.500595  428718 ssh_runner.go:195] Run: crio --version
	I1123 09:09:15.539580  428718 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1123 09:09:15.540673  428718 cli_runner.go:164] Run: docker network inspect newest-cni-531046 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1123 09:09:15.559666  428718 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1123 09:09:15.564697  428718 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1123 09:09:15.581138  428718 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1123 09:09:15.582462  428718 kubeadm.go:884] updating cluster {Name:newest-cni-531046 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-531046 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1123 09:09:15.582650  428718 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1123 09:09:15.582727  428718 ssh_runner.go:195] Run: sudo crictl images --output json
	I1123 09:09:15.616458  428718 crio.go:514] all images are preloaded for cri-o runtime.
	I1123 09:09:15.616482  428718 crio.go:433] Images already preloaded, skipping extraction
	I1123 09:09:15.616540  428718 ssh_runner.go:195] Run: sudo crictl images --output json
	I1123 09:09:15.642742  428718 crio.go:514] all images are preloaded for cri-o runtime.
	I1123 09:09:15.642763  428718 cache_images.go:86] Images are preloaded, skipping loading
	I1123 09:09:15.642771  428718 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1123 09:09:15.642861  428718 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-531046 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-531046 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
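
Note the empty ExecStart= line in the kubelet unit above: systemd allows only one ExecStart for a simple service, so an override (the log scp's this to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few lines below) must first clear the inherited value before setting its own. The same pattern in a minimal, hypothetical drop-in:

	# Clear, then set: without the empty assignment systemd rejects a second ExecStart.
	cat <<'EOF' | sudo tee /etc/systemd/system/kubelet.service.d/99-example.conf
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --kubeconfig=/etc/kubernetes/kubelet.conf
	EOF
	sudo systemctl daemon-reload
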
	I1123 09:09:15.642928  428718 ssh_runner.go:195] Run: crio config
	I1123 09:09:15.691553  428718 cni.go:84] Creating CNI manager for ""
	I1123 09:09:15.691572  428718 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1123 09:09:15.691591  428718 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1123 09:09:15.691621  428718 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-531046 NodeName:newest-cni-531046 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1123 09:09:15.691777  428718 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-531046"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1123 09:09:15.691843  428718 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1123 09:09:15.700340  428718 binaries.go:51] Found k8s binaries, skipping transfer
	I1123 09:09:15.700413  428718 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1123 09:09:15.710236  428718 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1123 09:09:15.727317  428718 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1123 09:09:15.743376  428718 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2211 bytes)
	I1123 09:09:15.758455  428718 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1123 09:09:15.762936  428718 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1123 09:09:15.773856  428718 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 09:09:15.864228  428718 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 09:09:15.886692  428718 certs.go:69] Setting up /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/newest-cni-531046 for IP: 192.168.76.2
	I1123 09:09:15.886715  428718 certs.go:195] generating shared ca certs ...
	I1123 09:09:15.886734  428718 certs.go:227] acquiring lock for ca certs: {Name:mkeed0bc088459219396fb35c578dfb0927f31a3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 09:09:15.886911  428718 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21969-103686/.minikube/ca.key
	I1123 09:09:15.886986  428718 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21969-103686/.minikube/proxy-client-ca.key
	I1123 09:09:15.887002  428718 certs.go:257] generating profile certs ...
	I1123 09:09:15.887116  428718 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/newest-cni-531046/client.key
	I1123 09:09:15.887192  428718 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/newest-cni-531046/apiserver.key.a1ea44be
	I1123 09:09:15.887245  428718 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/newest-cni-531046/proxy-client.key
	I1123 09:09:15.887384  428718 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-103686/.minikube/certs/107234.pem (1338 bytes)
	W1123 09:09:15.887428  428718 certs.go:480] ignoring /home/jenkins/minikube-integration/21969-103686/.minikube/certs/107234_empty.pem, impossibly tiny 0 bytes
	I1123 09:09:15.887442  428718 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-103686/.minikube/certs/ca-key.pem (1675 bytes)
	I1123 09:09:15.887489  428718 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-103686/.minikube/certs/ca.pem (1078 bytes)
	I1123 09:09:15.887522  428718 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-103686/.minikube/certs/cert.pem (1123 bytes)
	I1123 09:09:15.887550  428718 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-103686/.minikube/certs/key.pem (1675 bytes)
	I1123 09:09:15.887610  428718 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-103686/.minikube/files/etc/ssl/certs/1072342.pem (1708 bytes)
	I1123 09:09:15.888391  428718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-103686/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1123 09:09:15.908489  428718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-103686/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1123 09:09:15.931840  428718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-103686/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1123 09:09:15.955677  428718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-103686/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1671 bytes)
	I1123 09:09:15.980595  428718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/newest-cni-531046/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1123 09:09:16.003555  428718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/newest-cni-531046/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1123 09:09:16.021453  428718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/newest-cni-531046/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1123 09:09:16.038502  428718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/newest-cni-531046/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1123 09:09:16.055883  428718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-103686/.minikube/files/etc/ssl/certs/1072342.pem --> /usr/share/ca-certificates/1072342.pem (1708 bytes)
	I1123 09:09:16.072577  428718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-103686/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1123 09:09:16.090199  428718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-103686/.minikube/certs/107234.pem --> /usr/share/ca-certificates/107234.pem (1338 bytes)
	I1123 09:09:16.108367  428718 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1123 09:09:16.122045  428718 ssh_runner.go:195] Run: openssl version
	I1123 09:09:16.128705  428718 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1123 09:09:16.136943  428718 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1123 09:09:16.140531  428718 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 23 08:20 /usr/share/ca-certificates/minikubeCA.pem
	I1123 09:09:16.140588  428718 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1123 09:09:16.178739  428718 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1123 09:09:16.187754  428718 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/107234.pem && ln -fs /usr/share/ca-certificates/107234.pem /etc/ssl/certs/107234.pem"
	I1123 09:09:16.195960  428718 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/107234.pem
	I1123 09:09:16.199816  428718 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 23 08:25 /usr/share/ca-certificates/107234.pem
	I1123 09:09:16.199868  428718 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/107234.pem
	I1123 09:09:16.237427  428718 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/107234.pem /etc/ssl/certs/51391683.0"
	I1123 09:09:16.246469  428718 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1072342.pem && ln -fs /usr/share/ca-certificates/1072342.pem /etc/ssl/certs/1072342.pem"
	I1123 09:09:16.255027  428718 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1072342.pem
	I1123 09:09:16.258823  428718 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 23 08:25 /usr/share/ca-certificates/1072342.pem
	I1123 09:09:16.258886  428718 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1072342.pem
	I1123 09:09:16.299069  428718 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1072342.pem /etc/ssl/certs/3ec20f2e.0"
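
The b5213941.0, 51391683.0, and 3ec20f2e.0 names above are OpenSSL subject-hash links: tools that scan /etc/ssl/certs locate a trusted CA by the hash openssl computes from its subject. The general pattern, using a cert path from this log:

	# Create the subject-hash symlink OpenSSL-based clients expect.
	pem=/usr/share/ca-certificates/minikubeCA.pem
	sudo ln -fs "$pem" "/etc/ssl/certs/$(openssl x509 -hash -noout -in "$pem").0"
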
	I1123 09:09:16.308045  428718 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1123 09:09:16.312321  428718 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1123 09:09:16.349349  428718 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1123 09:09:16.387826  428718 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1123 09:09:16.435139  428718 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1123 09:09:16.482951  428718 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1123 09:09:16.533236  428718 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
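
Each -checkend 86400 probe above asks openssl whether the certificate expires within the next 86400 seconds (24 hours): exit status 0 means it stays valid past that horizon, non-zero means expiring or unreadable, which is what the restart path keys off when deciding whether certs need regenerating. Sketch:

	# Exit status signals whether the cert survives the next 24 hours.
	if openssl x509 -noout -in /var/lib/minikube/certs/apiserver.crt -checkend 86400; then
		echo "cert valid for at least another day"
	else
		echo "cert expiring soon (or unreadable); regenerate"
	fi
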
	I1123 09:09:16.591746  428718 kubeadm.go:401] StartCluster: {Name:newest-cni-531046 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-531046 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 09:09:16.591897  428718 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1123 09:09:16.592012  428718 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1123 09:09:16.623916  428718 cri.go:89] found id: "b8d492ab9433edafd1001b1ad9293c111df36e0796915a8d3f0c6bc7c2cdf3df"
	I1123 09:09:16.623942  428718 cri.go:89] found id: "0349a0b9c0911ac10237b136d83d49de278765fa5222cc116b95ab287527cd9b"
	I1123 09:09:16.623948  428718 cri.go:89] found id: "6a43edcb0ace54dc346700c8af14f2c2903a53edccf3417648cd37fa8485786d"
	I1123 09:09:16.623952  428718 cri.go:89] found id: "4e62ba65019726752dfd1a28db17ceb7288f5f526cdecef122cccdc9395928a0"
	I1123 09:09:16.623956  428718 cri.go:89] found id: ""
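
The four IDs above come from filtering every CRI container by its pod-namespace label. The same query can be reproduced by hand (first command verbatim from the log; the crictl inspect call is added for illustration):

	# List kube-system container IDs known to CRI-O, then inspect one of them.
	sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
	sudo crictl inspect b8d492ab9433edafd1001b1ad9293c111df36e0796915a8d3f0c6bc7c2cdf3df
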
	I1123 09:09:16.624037  428718 ssh_runner.go:195] Run: sudo runc list -f json
	W1123 09:09:16.637501  428718 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T09:09:16Z" level=error msg="open /run/runc: no such file or directory"
	I1123 09:09:16.637584  428718 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1123 09:09:16.647076  428718 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1123 09:09:16.647101  428718 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1123 09:09:16.647174  428718 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1123 09:09:16.656920  428718 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1123 09:09:16.658079  428718 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-531046" does not appear in /home/jenkins/minikube-integration/21969-103686/kubeconfig
	I1123 09:09:16.658732  428718 kubeconfig.go:62] /home/jenkins/minikube-integration/21969-103686/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-531046" cluster setting kubeconfig missing "newest-cni-531046" context setting]
	I1123 09:09:16.659991  428718 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21969-103686/kubeconfig: {Name:mk8cdc20da988b29689f768b3cb01ff7e637077d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 09:09:16.661957  428718 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1123 09:09:16.670780  428718 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1123 09:09:16.670810  428718 kubeadm.go:602] duration metric: took 23.701311ms to restartPrimaryControlPlane
	I1123 09:09:16.670821  428718 kubeadm.go:403] duration metric: took 79.16679ms to StartCluster
	I1123 09:09:16.670837  428718 settings.go:142] acquiring lock: {Name:mk7e59eae8b3289f60fef384e6a5716369959bb2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 09:09:16.670894  428718 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21969-103686/kubeconfig
	I1123 09:09:16.673044  428718 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21969-103686/kubeconfig: {Name:mk8cdc20da988b29689f768b3cb01ff7e637077d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 09:09:16.673289  428718 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1123 09:09:16.673479  428718 config.go:182] Loaded profile config "newest-cni-531046": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 09:09:16.673459  428718 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1123 09:09:16.673557  428718 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-531046"
	I1123 09:09:16.673580  428718 addons.go:70] Setting dashboard=true in profile "newest-cni-531046"
	I1123 09:09:16.673603  428718 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-531046"
	I1123 09:09:16.673610  428718 addons.go:239] Setting addon dashboard=true in "newest-cni-531046"
	W1123 09:09:16.673613  428718 addons.go:248] addon storage-provisioner should already be in state true
	W1123 09:09:16.673619  428718 addons.go:248] addon dashboard should already be in state true
	I1123 09:09:16.673619  428718 addons.go:70] Setting default-storageclass=true in profile "newest-cni-531046"
	I1123 09:09:16.673637  428718 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-531046"
	I1123 09:09:16.673641  428718 host.go:66] Checking if "newest-cni-531046" exists ...
	I1123 09:09:16.673653  428718 host.go:66] Checking if "newest-cni-531046" exists ...
	I1123 09:09:16.673957  428718 cli_runner.go:164] Run: docker container inspect newest-cni-531046 --format={{.State.Status}}
	I1123 09:09:16.674200  428718 cli_runner.go:164] Run: docker container inspect newest-cni-531046 --format={{.State.Status}}
	I1123 09:09:16.674201  428718 cli_runner.go:164] Run: docker container inspect newest-cni-531046 --format={{.State.Status}}
	I1123 09:09:16.674767  428718 out.go:179] * Verifying Kubernetes components...
	I1123 09:09:16.675943  428718 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 09:09:16.701001  428718 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1123 09:09:16.702065  428718 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1123 09:09:16.702082  428718 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1123 09:09:16.702722  428718 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-531046
	I1123 09:09:16.703253  428718 addons.go:239] Setting addon default-storageclass=true in "newest-cni-531046"
	W1123 09:09:16.703273  428718 addons.go:248] addon default-storageclass should already be in state true
	I1123 09:09:16.703305  428718 host.go:66] Checking if "newest-cni-531046" exists ...
	I1123 09:09:16.703772  428718 cli_runner.go:164] Run: docker container inspect newest-cni-531046 --format={{.State.Status}}
	I1123 09:09:16.704323  428718 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1123 09:09:16.705829  428718 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1123 09:09:16.706914  428718 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1123 09:09:16.706958  428718 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1123 09:09:16.707051  428718 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-531046
	I1123 09:09:16.741059  428718 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21969-103686/.minikube/machines/newest-cni-531046/id_rsa Username:docker}
	I1123 09:09:16.742145  428718 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1123 09:09:16.742209  428718 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1123 09:09:16.742331  428718 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-531046
	I1123 09:09:16.744371  428718 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21969-103686/.minikube/machines/newest-cni-531046/id_rsa Username:docker}
	I1123 09:09:16.772639  428718 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21969-103686/.minikube/machines/newest-cni-531046/id_rsa Username:docker}
	I1123 09:09:16.838556  428718 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 09:09:16.855010  428718 api_server.go:52] waiting for apiserver process to appear ...
	I1123 09:09:16.855122  428718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1123 09:09:16.868146  428718 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1123 09:09:16.869823  428718 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1123 09:09:16.869853  428718 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1123 09:09:16.872718  428718 api_server.go:72] duration metric: took 199.388215ms to wait for apiserver process to appear ...
	I1123 09:09:16.872738  428718 api_server.go:88] waiting for apiserver healthz status ...
	I1123 09:09:16.872782  428718 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1123 09:09:16.887859  428718 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1123 09:09:16.887883  428718 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1123 09:09:16.904333  428718 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1123 09:09:16.909029  428718 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1123 09:09:16.909058  428718 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1123 09:09:16.927238  428718 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1123 09:09:16.927274  428718 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1123 09:09:16.948202  428718 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1123 09:09:16.948230  428718 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1123 09:09:16.968718  428718 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1123 09:09:16.968755  428718 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1123 09:09:16.986286  428718 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1123 09:09:16.986318  428718 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1123 09:09:17.003049  428718 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1123 09:09:17.003130  428718 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1123 09:09:17.018884  428718 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1123 09:09:17.018911  428718 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1123 09:09:17.034757  428718 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1123 09:09:18.395495  428718 api_server.go:279] https://192.168.76.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1123 09:09:18.395530  428718 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1123 09:09:18.395546  428718 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1123 09:09:18.409704  428718 api_server.go:279] https://192.168.76.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1123 09:09:18.409739  428718 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1123 09:09:18.873245  428718 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1123 09:09:18.877442  428718 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1123 09:09:18.877468  428718 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1123 09:09:18.924122  428718 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.055941929s)
	I1123 09:09:18.924171  428718 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.019794928s)
	I1123 09:09:18.924270  428718 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.889470808s)
	I1123 09:09:18.926158  428718 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-531046 addons enable metrics-server
	
	I1123 09:09:18.934451  428718 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1123 09:09:18.935583  428718 addons.go:530] duration metric: took 2.262123063s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1123 09:09:19.373799  428718 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1123 09:09:19.378037  428718 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1123 09:09:19.378064  428718 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1123 09:09:19.873454  428718 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1123 09:09:19.878905  428718 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1123 09:09:19.879992  428718 api_server.go:141] control plane version: v1.34.1
	I1123 09:09:19.880021  428718 api_server.go:131] duration metric: took 3.007275014s to wait for apiserver health ...
	I1123 09:09:19.880032  428718 system_pods.go:43] waiting for kube-system pods to appear ...
	I1123 09:09:19.883383  428718 system_pods.go:59] 8 kube-system pods found
	I1123 09:09:19.883415  428718 system_pods.go:61] "coredns-66bc5c9577-gk265" [0216f458-438b-4260-8320-f81fb2a01fd4] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1123 09:09:19.883422  428718 system_pods.go:61] "etcd-newest-cni-531046" [1003fb1b-b28b-499c-b1e6-5c8b3d23d4bf] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1123 09:09:19.883428  428718 system_pods.go:61] "kindnet-pbp7c" [72da9944-1b43-4f59-b27a-78a6ebd8f3dc] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1123 09:09:19.883437  428718 system_pods.go:61] "kube-apiserver-newest-cni-531046" [92975545-d846-4326-9cc5-cf12a61f834b] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1123 09:09:19.883445  428718 system_pods.go:61] "kube-controller-manager-newest-cni-531046" [769616d3-3a60-45b1-9246-80ccba447cb5] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1123 09:09:19.883460  428718 system_pods.go:61] "kube-proxy-4bpzx" [a0812143-d250-4445-85b7-dc7d4dbb23ad] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1123 09:09:19.883468  428718 system_pods.go:61] "kube-scheduler-newest-cni-531046" [f713d5f5-1579-48f4-b2f3-9340bfc94c84] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1123 09:09:19.883479  428718 system_pods.go:61] "storage-provisioner" [d15b527f-4a7d-4cd4-bd83-5f0ec906909f] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1123 09:09:19.883485  428718 system_pods.go:74] duration metric: took 3.447563ms to wait for pod list to return data ...
	I1123 09:09:19.883492  428718 default_sa.go:34] waiting for default service account to be created ...
	I1123 09:09:19.886038  428718 default_sa.go:45] found service account: "default"
	I1123 09:09:19.886055  428718 default_sa.go:55] duration metric: took 2.555301ms for default service account to be created ...
	I1123 09:09:19.886067  428718 kubeadm.go:587] duration metric: took 3.212741373s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1123 09:09:19.886084  428718 node_conditions.go:102] verifying NodePressure condition ...
	I1123 09:09:19.888475  428718 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1123 09:09:19.888510  428718 node_conditions.go:123] node cpu capacity is 8
	I1123 09:09:19.888527  428718 node_conditions.go:105] duration metric: took 2.434606ms to run NodePressure ...
	I1123 09:09:19.888549  428718 start.go:242] waiting for startup goroutines ...
	I1123 09:09:19.888563  428718 start.go:247] waiting for cluster config update ...
	I1123 09:09:19.888578  428718 start.go:256] writing updated cluster config ...
	I1123 09:09:19.888867  428718 ssh_runner.go:195] Run: rm -f paused
	I1123 09:09:19.937632  428718 start.go:625] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1123 09:09:19.945384  428718 out.go:179] * Done! kubectl is now configured to use "newest-cni-531046" cluster and "default" namespace by default
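	
	Note on the healthz probes above: the early 403 responses are expected, since the unauthenticated probe runs as system:anonymous before the RBAC bootstrap grants access to /healthz, and the 500 responses show the rbac/bootstrap-roles and scheduling/bootstrap-system-priority-classes post-start hooks still in flight. Both clear within about 3 seconds here, after which the endpoint returns 200 "ok". The same probe can be reproduced by hand (assuming the node IP is reachable from the host):
	
		curl -k "https://192.168.76.2:8443/healthz?verbose"
	
	Adding ?verbose makes the apiserver return the per-check [+]/[-] listing even when the overall status is 200.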
	
	
	==> CRI-O <==
	Nov 23 09:08:39 default-k8s-diff-port-602386 crio[568]: time="2025-11-23T09:08:39.728926672Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 23 09:08:39 default-k8s-diff-port-602386 crio[568]: time="2025-11-23T09:08:39.728964463Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 23 09:08:39 default-k8s-diff-port-602386 crio[568]: time="2025-11-23T09:08:39.729006911Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 23 09:08:39 default-k8s-diff-port-602386 crio[568]: time="2025-11-23T09:08:39.734079097Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 23 09:08:39 default-k8s-diff-port-602386 crio[568]: time="2025-11-23T09:08:39.73412483Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 23 09:08:39 default-k8s-diff-port-602386 crio[568]: time="2025-11-23T09:08:39.734150713Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 23 09:08:39 default-k8s-diff-port-602386 crio[568]: time="2025-11-23T09:08:39.741946353Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 23 09:08:39 default-k8s-diff-port-602386 crio[568]: time="2025-11-23T09:08:39.742017625Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 23 09:08:39 default-k8s-diff-port-602386 crio[568]: time="2025-11-23T09:08:39.742041733Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 23 09:08:39 default-k8s-diff-port-602386 crio[568]: time="2025-11-23T09:08:39.746398075Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 23 09:08:39 default-k8s-diff-port-602386 crio[568]: time="2025-11-23T09:08:39.746433372Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 23 09:08:39 default-k8s-diff-port-602386 crio[568]: time="2025-11-23T09:08:39.746454893Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 23 09:08:39 default-k8s-diff-port-602386 crio[568]: time="2025-11-23T09:08:39.751451745Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 23 09:08:39 default-k8s-diff-port-602386 crio[568]: time="2025-11-23T09:08:39.751480193Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 23 09:08:53 default-k8s-diff-port-602386 crio[568]: time="2025-11-23T09:08:53.809493307Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=9540d810-4ac0-40d4-807f-790ffa5da693 name=/runtime.v1.ImageService/ImageStatus
	Nov 23 09:08:53 default-k8s-diff-port-602386 crio[568]: time="2025-11-23T09:08:53.816002773Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=af714aed-e4c1-4ecc-839e-a0f4e152f8d2 name=/runtime.v1.ImageService/ImageStatus
	Nov 23 09:08:53 default-k8s-diff-port-602386 crio[568]: time="2025-11-23T09:08:53.819307516Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-4j425/dashboard-metrics-scraper" id=bb8502e7-1576-47ec-9615-01a4d039db4f name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 09:08:53 default-k8s-diff-port-602386 crio[568]: time="2025-11-23T09:08:53.819541623Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 09:08:53 default-k8s-diff-port-602386 crio[568]: time="2025-11-23T09:08:53.827744441Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 09:08:53 default-k8s-diff-port-602386 crio[568]: time="2025-11-23T09:08:53.82834853Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 09:08:53 default-k8s-diff-port-602386 crio[568]: time="2025-11-23T09:08:53.850372073Z" level=info msg="Created container 11275f4b0df65c4816abcdde0d17361833f91eff663b0579a1fa05e5bb378cdd: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-4j425/dashboard-metrics-scraper" id=bb8502e7-1576-47ec-9615-01a4d039db4f name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 09:08:53 default-k8s-diff-port-602386 crio[568]: time="2025-11-23T09:08:53.85164864Z" level=info msg="Starting container: 11275f4b0df65c4816abcdde0d17361833f91eff663b0579a1fa05e5bb378cdd" id=df3de171-8181-4026-91c1-ddc439e4f725 name=/runtime.v1.RuntimeService/StartContainer
	Nov 23 09:08:53 default-k8s-diff-port-602386 crio[568]: time="2025-11-23T09:08:53.856294375Z" level=info msg="Started container" PID=1800 containerID=11275f4b0df65c4816abcdde0d17361833f91eff663b0579a1fa05e5bb378cdd description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-4j425/dashboard-metrics-scraper id=df3de171-8181-4026-91c1-ddc439e4f725 name=/runtime.v1.RuntimeService/StartContainer sandboxID=d3a3fe9184c722ec7613aba209e2fd11ae0eb7c3dbb64b8d8e1e20644d8644c0
	Nov 23 09:08:53 default-k8s-diff-port-602386 crio[568]: time="2025-11-23T09:08:53.942732102Z" level=info msg="Removing container: 53c19b710b42583be2a6cf92885f9fcdf990c53b41b950a2b0cd3f7ef6687566" id=7b29b7cb-a30a-4324-81f5-d1b7bb23b1c4 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 23 09:08:53 default-k8s-diff-port-602386 crio[568]: time="2025-11-23T09:08:53.95371464Z" level=info msg="Removed container 53c19b710b42583be2a6cf92885f9fcdf990c53b41b950a2b0cd3f7ef6687566: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-4j425/dashboard-metrics-scraper" id=7b29b7cb-a30a-4324-81f5-d1b7bb23b1c4 name=/runtime.v1.RuntimeService/RemoveContainer
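	
	The CNI monitoring WRITE/RENAME/CREATE sequence above is CRI-O's inotify watcher observing kindnet's atomic config update: kindnet writes 10-kindnet.conflist.temp and then renames it over the final name so the runtime never reads a half-written file, and CRI-O re-resolves the default network on each event. The resulting config can be inspected on the node:
	
		cat /etc/cni/net.d/10-kindnet.conflist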
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                                    NAMESPACE
	11275f4b0df65       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           34 seconds ago       Exited              dashboard-metrics-scraper   2                   d3a3fe9184c72       dashboard-metrics-scraper-6ffb444bf9-4j425             kubernetes-dashboard
	91e83b67da04c       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   49 seconds ago       Running             kubernetes-dashboard        0                   6278b30e6324c       kubernetes-dashboard-855c9754f9-kvdxq                  kubernetes-dashboard
	f14ee7020f52e       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           58 seconds ago       Running             storage-provisioner         1                   2814f8968b14c       storage-provisioner                                    kube-system
	cd96cf7cc0e77       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           59 seconds ago       Running             coredns                     0                   f1c513ba8f249       coredns-66bc5c9577-64rdm                               kube-system
	c5b6451a6b50f       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           59 seconds ago       Running             busybox                     1                   42e61f64f3b4a       busybox                                                default
	aa26fc448ed80       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                           59 seconds ago       Running             kube-proxy                  0                   1f1db409c8ec8       kube-proxy-wnrqx                                       kube-system
	afed45fbbc92d       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           59 seconds ago       Running             kindnet-cni                 0                   ab1752e4ab5c3       kindnet-kqj66                                          kube-system
	5eb3b51fac344       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           59 seconds ago       Exited              storage-provisioner         0                   2814f8968b14c       storage-provisioner                                    kube-system
	59138b2d82268       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                           About a minute ago   Running             kube-apiserver              0                   70ae2660b4ec8       kube-apiserver-default-k8s-diff-port-602386            kube-system
	1adb64fac9cd8       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                           About a minute ago   Running             kube-scheduler              0                   78578c3b14c60       kube-scheduler-default-k8s-diff-port-602386            kube-system
	cb6038e0d1fc6       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                           About a minute ago   Running             etcd                        0                   83df9c15b47b4       etcd-default-k8s-diff-port-602386                      kube-system
	88d09657521f5       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                           About a minute ago   Running             kube-controller-manager     0                   139aefc5864df       kube-controller-manager-default-k8s-diff-port-602386   kube-system
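	
	The table above is CRI-level container state rather than pod status; a sketch of reproducing it on the node, assuming crictl is installed and pointed at the CRI-O socket:
	
		sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock ps -a
	
	Exited containers such as the dashboard-metrics-scraper attempt stay listed until the kubelet garbage-collects them, which is why both the exited attempt and the running replacements share sandbox IDs here.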
	
	
	==> coredns [cd96cf7cc0e773f467f3b68dff638e0dd554eef88b837e152f12725e95e7f10d] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = c7556d8fdf49c5e32a9077be8cfb9fc6947bb07e663a10d55b192eb63ad1f2bd9793e8e5f5a36fc9abb1957831eec5c997fd9821790e3990ae9531bf41ecea37
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:47939 - 58161 "HINFO IN 7168959024589116106.845910367791227650. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.024706783s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/ready: Still waiting on: "kubernetes"
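	
	The "dial tcp 10.96.0.1:443: i/o timeout" errors are CoreDNS failing to reach the kubernetes Service ClusterIP while kube-proxy and kindnet were still reprogramming rules after the restart; the kubernetes plugin keeps retrying, and the ready plugin reports "Still waiting" until the API becomes reachable. The same logs can be pulled from outside the node (label assumed from the standard CoreDNS deployment):
	
		kubectl -n kube-system logs -l k8s-app=kube-dns --tail=50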
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-602386
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-602386
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=50c3a8a3c03e8a84b6c978a884d21c3de8c6d4f1
	                    minikube.k8s.io/name=default-k8s-diff-port-602386
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_23T09_07_31_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 23 Nov 2025 09:07:28 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-602386
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 23 Nov 2025 09:09:19 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 23 Nov 2025 09:09:19 +0000   Sun, 23 Nov 2025 09:07:26 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 23 Nov 2025 09:09:19 +0000   Sun, 23 Nov 2025 09:07:26 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 23 Nov 2025 09:09:19 +0000   Sun, 23 Nov 2025 09:07:26 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 23 Nov 2025 09:09:19 +0000   Sun, 23 Nov 2025 09:07:47 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    default-k8s-diff-port-602386
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 9629f1d5bc1ed524a56ce23c69214c09
	  System UUID:                080d5fdd-e379-43ff-bc41-4910fe3f507a
	  Boot ID:                    49386d37-de97-442f-8364-9b77578bcf47
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         97s
	  kube-system                 coredns-66bc5c9577-64rdm                                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     112s
	  kube-system                 etcd-default-k8s-diff-port-602386                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         118s
	  kube-system                 kindnet-kqj66                                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      112s
	  kube-system                 kube-apiserver-default-k8s-diff-port-602386             250m (3%)     0 (0%)      0 (0%)           0 (0%)         118s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-602386    200m (2%)     0 (0%)      0 (0%)           0 (0%)         118s
	  kube-system                 kube-proxy-wnrqx                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         112s
	  kube-system                 kube-scheduler-default-k8s-diff-port-602386             100m (1%)     0 (0%)      0 (0%)           0 (0%)         118s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         113s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-4j425              0 (0%)        0 (0%)      0 (0%)           0 (0%)         56s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-kvdxq                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         56s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 111s                 kube-proxy       
	  Normal  Starting                 59s                  kube-proxy       
	  Normal  Starting                 2m3s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m3s (x8 over 2m3s)  kubelet          Node default-k8s-diff-port-602386 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m3s (x8 over 2m3s)  kubelet          Node default-k8s-diff-port-602386 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m3s (x8 over 2m3s)  kubelet          Node default-k8s-diff-port-602386 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    118s                 kubelet          Node default-k8s-diff-port-602386 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  118s                 kubelet          Node default-k8s-diff-port-602386 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     118s                 kubelet          Node default-k8s-diff-port-602386 status is now: NodeHasSufficientPID
	  Normal  Starting                 118s                 kubelet          Starting kubelet.
	  Normal  RegisteredNode           113s                 node-controller  Node default-k8s-diff-port-602386 event: Registered Node default-k8s-diff-port-602386 in Controller
	  Normal  NodeReady                101s                 kubelet          Node default-k8s-diff-port-602386 status is now: NodeReady
	  Normal  Starting                 63s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  63s (x8 over 63s)    kubelet          Node default-k8s-diff-port-602386 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    63s (x8 over 63s)    kubelet          Node default-k8s-diff-port-602386 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     63s (x8 over 63s)    kubelet          Node default-k8s-diff-port-602386 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           57s                  node-controller  Node default-k8s-diff-port-602386 event: Registered Node default-k8s-diff-port-602386 in Controller
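	
	The node description above can be regenerated at any time with:
	
		kubectl describe node default-k8s-diff-port-602386
	
	The repeated Starting/NodeHasSufficient* events reflect the kubelet being started three times during this run (at 2m3s, 118s, and 63s before the capture), not a flapping node; the single NodeReady transition at 101s is the one that matters.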
	
	
	==> dmesg <==
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff e2 f6 b3 df 65 66 08 06
	[ +15.220231] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff ce d6 cd 1c d5 af 08 06
	[  +0.016823] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff 42 21 9e 70 54 d2 08 06
	[  +0.853950] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 8a f3 da 67 50 34 08 06
	[  +0.000402] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff e2 f6 b3 df 65 66 08 06
	[Nov23 09:06] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 3a fe f0 bb b2 e5 08 06
	[  +0.000433] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000017] ll header: 00000000: ff ff ff ff ff ff 42 21 9e 70 54 d2 08 06
	[ +22.099976] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff fa bc 42 68 ca 38 08 06
	[  +0.042361] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 02 6f 93 2c ed 12 08 06
	[ +12.988668] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff c6 40 c7 0d 08 88 08 06
	[  +0.000458] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff c6 f2 c5 3b d5 0a 08 06
	[  +8.074904] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff ba d8 15 23 cb ea 08 06
	[  +0.000480] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff fa bc 42 68 ca 38 08 06
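	
	The "martian source" lines are the kernel flagging packets whose source address is unexpected on eth0; with pod traffic in 10.244.0.0/24 crossing the Docker bridge while CNI is being reconfigured, this is routine noise rather than a failure. Whether such packets are logged at all is governed by sysctls:
	
		sysctl net.ipv4.conf.all.log_martians net.ipv4.conf.all.rp_filter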
	
	
	==> etcd [cb6038e0d1fc65f02647a28477fb55a987cc2404a8c90e7eb192a2e5f4e18b98] <==
	{"level":"warn","ts":"2025-11-23T09:08:27.296175Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36808","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:08:27.304693Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36828","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:08:27.314810Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36842","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:08:27.323188Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36854","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:08:27.330206Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36876","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:08:27.339239Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36892","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:08:27.346910Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36934","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:08:27.354817Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36936","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:08:27.361342Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36954","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:08:27.369358Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36970","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:08:27.376691Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36994","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:08:27.387109Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37012","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:08:27.396559Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37020","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:08:27.408860Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37038","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:08:27.417889Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37060","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:08:27.430281Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37088","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:08:27.440223Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37118","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:08:27.447914Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37122","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:08:27.515878Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37152","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:08:43.151401Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"128.434518ms","expected-duration":"100ms","prefix":"","request":"header:<ID:6571766361847010322 > lease_revoke:<id:5b339aaff7d5b792>","response":"size:29"}
	{"level":"info","ts":"2025-11-23T09:08:43.151533Z","caller":"traceutil/trace.go:172","msg":"trace[1551499691] linearizableReadLoop","detail":"{readStateIndex:667; appliedIndex:666; }","duration":"126.058397ms","start":"2025-11-23T09:08:43.025455Z","end":"2025-11-23T09:08:43.151514Z","steps":["trace[1551499691] 'read index received'  (duration: 39.361µs)","trace[1551499691] 'applied index is now lower than readState.Index'  (duration: 126.017983ms)"],"step_count":2}
	{"level":"warn","ts":"2025-11-23T09:08:43.151713Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"126.244277ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/default-k8s-diff-port-602386\" limit:1 ","response":"range_response_count:1 size:5735"}
	{"level":"info","ts":"2025-11-23T09:08:43.151736Z","caller":"traceutil/trace.go:172","msg":"trace[1124825809] range","detail":"{range_begin:/registry/minions/default-k8s-diff-port-602386; range_end:; response_count:1; response_revision:636; }","duration":"126.279046ms","start":"2025-11-23T09:08:43.025450Z","end":"2025-11-23T09:08:43.151729Z","steps":["trace[1124825809] 'agreement among raft nodes before linearized reading'  (duration: 126.135484ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-23T09:08:44.140214Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"181.836264ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/coredns-66bc5c9577-64rdm\" limit:1 ","response":"range_response_count:1 size:5944"}
	{"level":"info","ts":"2025-11-23T09:08:44.140265Z","caller":"traceutil/trace.go:172","msg":"trace[180778889] range","detail":"{range_begin:/registry/pods/kube-system/coredns-66bc5c9577-64rdm; range_end:; response_count:1; response_revision:638; }","duration":"181.901973ms","start":"2025-11-23T09:08:43.958352Z","end":"2025-11-23T09:08:44.140254Z","steps":["trace[180778889] 'range keys from in-memory index tree'  (duration: 181.715507ms)"],"step_count":1}
	
	
	==> kernel <==
	 09:09:28 up  1:51,  0 user,  load average: 4.53, 4.44, 2.92
	Linux default-k8s-diff-port-602386 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [afed45fbbc92d2029b02e897ae37cc210e36f8800590cdeafc00d760c4e9fd26] <==
	I1123 09:08:29.477306       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1123 09:08:29.478504       1 main.go:139] hostIP = 192.168.94.2
	podIP = 192.168.94.2
	I1123 09:08:29.478788       1 main.go:148] setting mtu 1500 for CNI 
	I1123 09:08:29.478839       1 main.go:178] kindnetd IP family: "ipv4"
	I1123 09:08:29.478884       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-23T09:08:29Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1123 09:08:29.775730       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1123 09:08:29.775801       1 controller.go:381] "Waiting for informer caches to sync"
	I1123 09:08:29.775813       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1123 09:08:29.776775       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1123 09:08:30.215877       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1123 09:08:30.215911       1 metrics.go:72] Registering metrics
	I1123 09:08:30.216018       1 controller.go:711] "Syncing nftables rules"
	I1123 09:08:39.717376       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1123 09:08:39.717481       1 main.go:301] handling current node
	I1123 09:08:49.717158       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1123 09:08:49.717197       1 main.go:301] handling current node
	I1123 09:08:59.717359       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1123 09:08:59.717399       1 main.go:301] handling current node
	I1123 09:09:09.724763       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1123 09:09:09.724800       1 main.go:301] handling current node
	I1123 09:09:19.717155       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1123 09:09:19.717203       1 main.go:301] handling current node
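	
	The "nri plugin exited" line only means CRI-O's NRI socket is absent (NRI disabled), which kindnet tolerates; the subsequent "Caches are synced" and periodic node-handling lines show the controller running normally. The socket's absence can be confirmed directly:
	
		ls -l /var/run/nri/nri.sock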
	
	
	==> kube-apiserver [59138b2d822688d55c6f5894e7864beb2d6fa20594a1b422e8d201e2f8e1c1e2] <==
	I1123 09:08:28.182238       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1123 09:08:28.182497       1 aggregator.go:171] initial CRD sync complete...
	I1123 09:08:28.182512       1 autoregister_controller.go:144] Starting autoregister controller
	I1123 09:08:28.182521       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1123 09:08:28.182531       1 cache.go:39] Caches are synced for autoregister controller
	I1123 09:08:28.181660       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1123 09:08:28.181848       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1123 09:08:28.190029       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	E1123 09:08:28.194568       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1123 09:08:28.244016       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1123 09:08:28.257245       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1123 09:08:28.257585       1 policy_source.go:240] refreshing policies
	I1123 09:08:28.269773       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1123 09:08:28.595524       1 controller.go:667] quota admission added evaluator for: namespaces
	I1123 09:08:28.632772       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1123 09:08:28.658027       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1123 09:08:28.669243       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1123 09:08:28.682251       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1123 09:08:28.729494       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.98.22.138"}
	I1123 09:08:28.747809       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.104.247.172"}
	I1123 09:08:29.084270       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1123 09:08:31.571001       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1123 09:08:31.971544       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1123 09:08:32.070362       1 controller.go:667] quota admission added evaluator for: endpoints
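	
	The one E-level line above ("Error removing old endpoints from kubernetes service...") is a known benign restart message, logged before the restarted apiserver has re-registered its own endpoint; the clusterIP allocations and "quota admission added evaluator" lines that follow are normal bootstrap. A quick check that the endpoints were repopulated afterwards:
	
		kubectl -n default get endpoints kubernetes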
	
	
	==> kube-controller-manager [88d09657521f5eeced3d58b537526c35a1a86d0c7389280ba5c54672110cbd64] <==
	I1123 09:08:31.573949       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1123 09:08:31.577017       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1123 09:08:31.578259       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1123 09:08:31.584494       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1123 09:08:31.585658       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1123 09:08:31.600270       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1123 09:08:31.602457       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1123 09:08:31.605096       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1123 09:08:31.607379       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1123 09:08:31.610837       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1123 09:08:31.612361       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1123 09:08:31.612440       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1123 09:08:31.612536       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="default-k8s-diff-port-602386"
	I1123 09:08:31.612608       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1123 09:08:31.614805       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1123 09:08:31.615812       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1123 09:08:31.615847       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1123 09:08:31.615864       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1123 09:08:31.615874       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1123 09:08:31.616698       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1123 09:08:31.616768       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1123 09:08:31.618676       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1123 09:08:31.619607       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1123 09:08:31.619887       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1123 09:08:31.626356       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [aa26fc448ed8012666658fc3bdc730115691445a45555fec8b7f533709c28996] <==
	I1123 09:08:29.271073       1 server_linux.go:53] "Using iptables proxy"
	I1123 09:08:29.339330       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1123 09:08:29.439527       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1123 09:08:29.439572       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.94.2"]
	E1123 09:08:29.439718       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1123 09:08:29.464634       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1123 09:08:29.464695       1 server_linux.go:132] "Using iptables Proxier"
	I1123 09:08:29.472192       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1123 09:08:29.472661       1 server.go:527] "Version info" version="v1.34.1"
	I1123 09:08:29.472747       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1123 09:08:29.474285       1 config.go:200] "Starting service config controller"
	I1123 09:08:29.474349       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1123 09:08:29.475191       1 config.go:106] "Starting endpoint slice config controller"
	I1123 09:08:29.475334       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1123 09:08:29.475242       1 config.go:403] "Starting serviceCIDR config controller"
	I1123 09:08:29.475398       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1123 09:08:29.475527       1 config.go:309] "Starting node config controller"
	I1123 09:08:29.475591       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1123 09:08:29.475616       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1123 09:08:29.574460       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1123 09:08:29.575660       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1123 09:08:29.575670       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
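	
	The "Kube-proxy configuration may be incomplete or incorrect" line is a warning, not an error: nodePortAddresses is unset, so NodePort services accept connections on every local IP. In a kubeadm-style cluster like this one the setting lives in the kube-proxy ConfigMap; a sketch of adopting the suggested value (field name per the KubeProxyConfiguration API; the kube-proxy pods must be restarted to pick it up):
	
		kubectl -n kube-system edit configmap kube-proxy
		# in the embedded config.conf, set:
		#   nodePortAddresses: ["primary"]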
	
	
	==> kube-scheduler [1adb64fac9cd8ca83cde2ea33c1a1d01fd97bd090a659c910fd2247606de3613] <==
	I1123 09:08:27.464940       1 serving.go:386] Generated self-signed cert in-memory
	I1123 09:08:28.818242       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1123 09:08:28.818280       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1123 09:08:28.827667       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1123 09:08:28.827701       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1123 09:08:28.827799       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1123 09:08:28.827860       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1123 09:08:28.828824       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1123 09:08:28.828888       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1123 09:08:28.827619       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1123 09:08:28.829785       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1123 09:08:28.928304       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1123 09:08:28.931043       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1123 09:08:28.931201       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	
	
	==> kubelet <==
	Nov 23 09:08:32 default-k8s-diff-port-602386 kubelet[732]: I1123 09:08:32.241545     732 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/9c3c3e13-2b77-4be8-8c21-1334abedf770-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-4j425\" (UID: \"9c3c3e13-2b77-4be8-8c21-1334abedf770\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-4j425"
	Nov 23 09:08:32 default-k8s-diff-port-602386 kubelet[732]: I1123 09:08:32.241596     732 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-892lr\" (UniqueName: \"kubernetes.io/projected/9c3c3e13-2b77-4be8-8c21-1334abedf770-kube-api-access-892lr\") pod \"dashboard-metrics-scraper-6ffb444bf9-4j425\" (UID: \"9c3c3e13-2b77-4be8-8c21-1334abedf770\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-4j425"
	Nov 23 09:08:32 default-k8s-diff-port-602386 kubelet[732]: I1123 09:08:32.241618     732 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/a2c64126-6d33-4b13-b583-f9b044a3f500-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-kvdxq\" (UID: \"a2c64126-6d33-4b13-b583-f9b044a3f500\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-kvdxq"
	Nov 23 09:08:32 default-k8s-diff-port-602386 kubelet[732]: I1123 09:08:32.241748     732 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6sjxt\" (UniqueName: \"kubernetes.io/projected/a2c64126-6d33-4b13-b583-f9b044a3f500-kube-api-access-6sjxt\") pod \"kubernetes-dashboard-855c9754f9-kvdxq\" (UID: \"a2c64126-6d33-4b13-b583-f9b044a3f500\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-kvdxq"
	Nov 23 09:08:35 default-k8s-diff-port-602386 kubelet[732]: I1123 09:08:35.880322     732 scope.go:117] "RemoveContainer" containerID="3221c64469ef986ecafaabd929a785404d57b5459e34172fab3d56373cef44b3"
	Nov 23 09:08:36 default-k8s-diff-port-602386 kubelet[732]: I1123 09:08:36.885413     732 scope.go:117] "RemoveContainer" containerID="3221c64469ef986ecafaabd929a785404d57b5459e34172fab3d56373cef44b3"
	Nov 23 09:08:36 default-k8s-diff-port-602386 kubelet[732]: I1123 09:08:36.885579     732 scope.go:117] "RemoveContainer" containerID="53c19b710b42583be2a6cf92885f9fcdf990c53b41b950a2b0cd3f7ef6687566"
	Nov 23 09:08:36 default-k8s-diff-port-602386 kubelet[732]: E1123 09:08:36.885791     732 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-4j425_kubernetes-dashboard(9c3c3e13-2b77-4be8-8c21-1334abedf770)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-4j425" podUID="9c3c3e13-2b77-4be8-8c21-1334abedf770"
	Nov 23 09:08:37 default-k8s-diff-port-602386 kubelet[732]: I1123 09:08:37.890371     732 scope.go:117] "RemoveContainer" containerID="53c19b710b42583be2a6cf92885f9fcdf990c53b41b950a2b0cd3f7ef6687566"
	Nov 23 09:08:37 default-k8s-diff-port-602386 kubelet[732]: E1123 09:08:37.890534     732 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-4j425_kubernetes-dashboard(9c3c3e13-2b77-4be8-8c21-1334abedf770)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-4j425" podUID="9c3c3e13-2b77-4be8-8c21-1334abedf770"
	Nov 23 09:08:39 default-k8s-diff-port-602386 kubelet[732]: I1123 09:08:39.912683     732 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-kvdxq" podStartSLOduration=0.860257868 podStartE2EDuration="7.912663419s" podCreationTimestamp="2025-11-23 09:08:32 +0000 UTC" firstStartedPulling="2025-11-23 09:08:32.487875226 +0000 UTC m=+6.770012927" lastFinishedPulling="2025-11-23 09:08:39.540280769 +0000 UTC m=+13.822418478" observedRunningTime="2025-11-23 09:08:39.911886792 +0000 UTC m=+14.194024501" watchObservedRunningTime="2025-11-23 09:08:39.912663419 +0000 UTC m=+14.194801128"
	Nov 23 09:08:40 default-k8s-diff-port-602386 kubelet[732]: I1123 09:08:40.894445     732 scope.go:117] "RemoveContainer" containerID="53c19b710b42583be2a6cf92885f9fcdf990c53b41b950a2b0cd3f7ef6687566"
	Nov 23 09:08:40 default-k8s-diff-port-602386 kubelet[732]: E1123 09:08:40.894685     732 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-4j425_kubernetes-dashboard(9c3c3e13-2b77-4be8-8c21-1334abedf770)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-4j425" podUID="9c3c3e13-2b77-4be8-8c21-1334abedf770"
	Nov 23 09:08:53 default-k8s-diff-port-602386 kubelet[732]: I1123 09:08:53.808979     732 scope.go:117] "RemoveContainer" containerID="53c19b710b42583be2a6cf92885f9fcdf990c53b41b950a2b0cd3f7ef6687566"
	Nov 23 09:08:53 default-k8s-diff-port-602386 kubelet[732]: I1123 09:08:53.938860     732 scope.go:117] "RemoveContainer" containerID="53c19b710b42583be2a6cf92885f9fcdf990c53b41b950a2b0cd3f7ef6687566"
	Nov 23 09:08:53 default-k8s-diff-port-602386 kubelet[732]: I1123 09:08:53.940177     732 scope.go:117] "RemoveContainer" containerID="11275f4b0df65c4816abcdde0d17361833f91eff663b0579a1fa05e5bb378cdd"
	Nov 23 09:08:53 default-k8s-diff-port-602386 kubelet[732]: E1123 09:08:53.941877     732 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-4j425_kubernetes-dashboard(9c3c3e13-2b77-4be8-8c21-1334abedf770)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-4j425" podUID="9c3c3e13-2b77-4be8-8c21-1334abedf770"
	Nov 23 09:09:00 default-k8s-diff-port-602386 kubelet[732]: I1123 09:09:00.895036     732 scope.go:117] "RemoveContainer" containerID="11275f4b0df65c4816abcdde0d17361833f91eff663b0579a1fa05e5bb378cdd"
	Nov 23 09:09:00 default-k8s-diff-port-602386 kubelet[732]: E1123 09:09:00.895316     732 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-4j425_kubernetes-dashboard(9c3c3e13-2b77-4be8-8c21-1334abedf770)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-4j425" podUID="9c3c3e13-2b77-4be8-8c21-1334abedf770"
	Nov 23 09:09:13 default-k8s-diff-port-602386 kubelet[732]: I1123 09:09:13.809549     732 scope.go:117] "RemoveContainer" containerID="11275f4b0df65c4816abcdde0d17361833f91eff663b0579a1fa05e5bb378cdd"
	Nov 23 09:09:13 default-k8s-diff-port-602386 kubelet[732]: E1123 09:09:13.809846     732 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-4j425_kubernetes-dashboard(9c3c3e13-2b77-4be8-8c21-1334abedf770)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-4j425" podUID="9c3c3e13-2b77-4be8-8c21-1334abedf770"
	Nov 23 09:09:25 default-k8s-diff-port-602386 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 23 09:09:25 default-k8s-diff-port-602386 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 23 09:09:25 default-k8s-diff-port-602386 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Nov 23 09:09:25 default-k8s-diff-port-602386 systemd[1]: kubelet.service: Consumed 1.817s CPU time.
	
	
	==> kubernetes-dashboard [91e83b67da04c4cfe73ddf9e56593b3d11b06e0e02c509a14bc1cbdb84283162] <==
	2025/11/23 09:08:39 Using namespace: kubernetes-dashboard
	2025/11/23 09:08:39 Using in-cluster config to connect to apiserver
	2025/11/23 09:08:39 Using secret token for csrf signing
	2025/11/23 09:08:39 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/23 09:08:39 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/23 09:08:39 Successful initial request to the apiserver, version: v1.34.1
	2025/11/23 09:08:39 Generating JWE encryption key
	2025/11/23 09:08:39 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/23 09:08:39 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/23 09:08:39 Initializing JWE encryption key from synchronized object
	2025/11/23 09:08:39 Creating in-cluster Sidecar client
	2025/11/23 09:08:39 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/23 09:08:39 Serving insecurely on HTTP port: 9090
	2025/11/23 09:09:09 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/23 09:08:39 Starting overwatch
	
	
	==> storage-provisioner [5eb3b51fac344707415ffe7f336121a5d12830688403e98b7b0b94240d69fcb1] <==
	I1123 09:08:29.202800       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1123 09:08:29.207347       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	
	
	==> storage-provisioner [f14ee7020f52eaf7fcb7b295bf1d8c156df5ee3eb3c0b0ceb8d09c76d808ccc2] <==
	W1123 09:09:03.399457       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:09:05.402823       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:09:05.406833       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:09:07.410300       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:09:07.414196       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:09:09.416693       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:09:09.420724       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:09:11.423676       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:09:11.427280       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:09:13.430168       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:09:13.434413       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:09:15.438124       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:09:15.442763       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:09:17.448888       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:09:17.453921       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:09:19.457716       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:09:19.462939       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:09:21.466722       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:09:21.471239       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:09:23.474726       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:09:23.478400       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:09:25.482461       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:09:25.487931       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:09:27.490455       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:09:27.494199       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
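The kubelet section above shows dashboard-metrics-scraper stuck in CrashLoopBackOff, with kubelet's restart back-off doubling from 10s to 20s between attempts, while the storage-provisioner warnings only record that its client still watches v1 Endpoints, deprecated in v1.33+ in favor of discovery.k8s.io/v1 EndpointSlice. A minimal follow-up sketch (assumed diagnostic commands, not run by the harness):

	kubectl --context default-k8s-diff-port-602386 -n kubernetes-dashboard describe pod dashboard-metrics-scraper-6ffb444bf9-4j425
	kubectl --context default-k8s-diff-port-602386 -n kube-system get endpointslices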
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-602386 -n default-k8s-diff-port-602386
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-602386 -n default-k8s-diff-port-602386: exit status 2 (336.801983ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
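The harness reads one status field at a time through a Go template; the same template syntax can report several fields in a single call (a sketch using the standard minikube status fields, not a command the harness runs):

	out/minikube-linux-amd64 status -p default-k8s-diff-port-602386 --format='{{.Host}} {{.Kubelet}} {{.APIServer}} {{.Kubeconfig}}'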
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-602386 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
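The --field-selector above filters server-side on pod phase, so only pods not in Running come back; a wide variant (assumed, not part of the harness) also surfaces the status and node columns:

	kubectl --context default-k8s-diff-port-602386 get po -A --field-selector=status.phase!=Running -o wide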
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-602386
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-602386:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "6c3d05e1255194aef2389817b0cc65aba6302f39805c376439e608e1f11bacd3",
	        "Created": "2025-11-23T09:07:12.808038368Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 417101,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-23T09:08:19.463608458Z",
	            "FinishedAt": "2025-11-23T09:08:18.530680539Z"
	        },
	        "Image": "sha256:133ca4ac39008d0056ad45d8cb70521d6b70d6e1b8bbff4678fd4b354efbdf70",
	        "ResolvConfPath": "/var/lib/docker/containers/6c3d05e1255194aef2389817b0cc65aba6302f39805c376439e608e1f11bacd3/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/6c3d05e1255194aef2389817b0cc65aba6302f39805c376439e608e1f11bacd3/hostname",
	        "HostsPath": "/var/lib/docker/containers/6c3d05e1255194aef2389817b0cc65aba6302f39805c376439e608e1f11bacd3/hosts",
	        "LogPath": "/var/lib/docker/containers/6c3d05e1255194aef2389817b0cc65aba6302f39805c376439e608e1f11bacd3/6c3d05e1255194aef2389817b0cc65aba6302f39805c376439e608e1f11bacd3-json.log",
	        "Name": "/default-k8s-diff-port-602386",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-602386:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "default-k8s-diff-port-602386",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "6c3d05e1255194aef2389817b0cc65aba6302f39805c376439e608e1f11bacd3",
	                "LowerDir": "/var/lib/docker/overlay2/bb5d6810584e73e290c3816b7cb94fabd3ce1d5d8e0d0a63df744232dca3547d-init/diff:/var/lib/docker/overlay2/5d7426d7b7590330a796f9ce8929a3dfaf6f95af5e86f4e4ea7f2a7f53308616/diff",
	                "MergedDir": "/var/lib/docker/overlay2/bb5d6810584e73e290c3816b7cb94fabd3ce1d5d8e0d0a63df744232dca3547d/merged",
	                "UpperDir": "/var/lib/docker/overlay2/bb5d6810584e73e290c3816b7cb94fabd3ce1d5d8e0d0a63df744232dca3547d/diff",
	                "WorkDir": "/var/lib/docker/overlay2/bb5d6810584e73e290c3816b7cb94fabd3ce1d5d8e0d0a63df744232dca3547d/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-602386",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-602386/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-602386",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-602386",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-602386",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "c0d7328bf99677c247b5508f576c4e9ba9b74b5f7cb31d47a8bd044eac10674b",
	            "SandboxKey": "/var/run/docker/netns/c0d7328bf996",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33123"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33124"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33127"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33125"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33126"
	                    }
	                ]
	            },
	            "Networks": {
	                "default-k8s-diff-port-602386": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "d9296937e29fbbdf6c66a1bc434a999db9b649eec0fa16933c388a9a19b340fe",
	                    "EndpointID": "d100b2efd07dc0306a961585428808317559df6a14a2f7479a55ba207a9f2205",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "MacAddress": "92:f7:3b:c9:b9:ca",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-602386",
	                        "6c3d05e12551"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
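The inspect dump confirms the cluster's apiserver port 8444/tcp is published on 127.0.0.1:33126. The Go template the harness already uses for the SSH port (visible in the Last Start log below) reads the same structure directly; for example:

	docker container inspect -f '{{(index (index .NetworkSettings.Ports "8444/tcp") 0).HostPort}}' default-k8s-diff-port-602386
	# expected output given the dump above: 33126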
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-602386 -n default-k8s-diff-port-602386
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-602386 -n default-k8s-diff-port-602386: exit status 2 (331.345281ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-602386 logs -n 25
E1123 09:09:30.318554  107234 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/kindnet-741183/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-602386 logs -n 25: (1.074723375s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ addons  │ enable dashboard -p default-k8s-diff-port-602386 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-602386 │ jenkins │ v1.37.0 │ 23 Nov 25 09:08 UTC │ 23 Nov 25 09:08 UTC │
	│ start   │ -p default-k8s-diff-port-602386 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-602386 │ jenkins │ v1.37.0 │ 23 Nov 25 09:08 UTC │ 23 Nov 25 09:09 UTC │
	│ image   │ old-k8s-version-054094 image list --format=json                                                                                                                                                                                               │ old-k8s-version-054094       │ jenkins │ v1.37.0 │ 23 Nov 25 09:08 UTC │ 23 Nov 25 09:08 UTC │
	│ pause   │ -p old-k8s-version-054094 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-054094       │ jenkins │ v1.37.0 │ 23 Nov 25 09:08 UTC │                     │
	│ delete  │ -p old-k8s-version-054094                                                                                                                                                                                                                     │ old-k8s-version-054094       │ jenkins │ v1.37.0 │ 23 Nov 25 09:08 UTC │ 23 Nov 25 09:08 UTC │
	│ delete  │ -p old-k8s-version-054094                                                                                                                                                                                                                     │ old-k8s-version-054094       │ jenkins │ v1.37.0 │ 23 Nov 25 09:08 UTC │ 23 Nov 25 09:08 UTC │
	│ start   │ -p newest-cni-531046 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-531046            │ jenkins │ v1.37.0 │ 23 Nov 25 09:08 UTC │ 23 Nov 25 09:09 UTC │
	│ image   │ no-preload-619589 image list --format=json                                                                                                                                                                                                    │ no-preload-619589            │ jenkins │ v1.37.0 │ 23 Nov 25 09:08 UTC │ 23 Nov 25 09:08 UTC │
	│ pause   │ -p no-preload-619589 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-619589            │ jenkins │ v1.37.0 │ 23 Nov 25 09:08 UTC │                     │
	│ delete  │ -p no-preload-619589                                                                                                                                                                                                                          │ no-preload-619589            │ jenkins │ v1.37.0 │ 23 Nov 25 09:08 UTC │ 23 Nov 25 09:08 UTC │
	│ delete  │ -p no-preload-619589                                                                                                                                                                                                                          │ no-preload-619589            │ jenkins │ v1.37.0 │ 23 Nov 25 09:08 UTC │ 23 Nov 25 09:08 UTC │
	│ addons  │ enable metrics-server -p newest-cni-531046 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-531046            │ jenkins │ v1.37.0 │ 23 Nov 25 09:09 UTC │                     │
	│ stop    │ -p newest-cni-531046 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-531046            │ jenkins │ v1.37.0 │ 23 Nov 25 09:09 UTC │ 23 Nov 25 09:09 UTC │
	│ addons  │ enable dashboard -p newest-cni-531046 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-531046            │ jenkins │ v1.37.0 │ 23 Nov 25 09:09 UTC │ 23 Nov 25 09:09 UTC │
	│ start   │ -p newest-cni-531046 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-531046            │ jenkins │ v1.37.0 │ 23 Nov 25 09:09 UTC │ 23 Nov 25 09:09 UTC │
	│ image   │ embed-certs-529341 image list --format=json                                                                                                                                                                                                   │ embed-certs-529341           │ jenkins │ v1.37.0 │ 23 Nov 25 09:09 UTC │ 23 Nov 25 09:09 UTC │
	│ pause   │ -p embed-certs-529341 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-529341           │ jenkins │ v1.37.0 │ 23 Nov 25 09:09 UTC │                     │
	│ delete  │ -p embed-certs-529341                                                                                                                                                                                                                         │ embed-certs-529341           │ jenkins │ v1.37.0 │ 23 Nov 25 09:09 UTC │ 23 Nov 25 09:09 UTC │
	│ image   │ newest-cni-531046 image list --format=json                                                                                                                                                                                                    │ newest-cni-531046            │ jenkins │ v1.37.0 │ 23 Nov 25 09:09 UTC │ 23 Nov 25 09:09 UTC │
	│ delete  │ -p embed-certs-529341                                                                                                                                                                                                                         │ embed-certs-529341           │ jenkins │ v1.37.0 │ 23 Nov 25 09:09 UTC │ 23 Nov 25 09:09 UTC │
	│ pause   │ -p newest-cni-531046 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-531046            │ jenkins │ v1.37.0 │ 23 Nov 25 09:09 UTC │                     │
	│ image   │ default-k8s-diff-port-602386 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-602386 │ jenkins │ v1.37.0 │ 23 Nov 25 09:09 UTC │ 23 Nov 25 09:09 UTC │
	│ pause   │ -p default-k8s-diff-port-602386 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-602386 │ jenkins │ v1.37.0 │ 23 Nov 25 09:09 UTC │                     │
	│ delete  │ -p newest-cni-531046                                                                                                                                                                                                                          │ newest-cni-531046            │ jenkins │ v1.37.0 │ 23 Nov 25 09:09 UTC │ 23 Nov 25 09:09 UTC │
	│ delete  │ -p newest-cni-531046                                                                                                                                                                                                                          │ newest-cni-531046            │ jenkins │ v1.37.0 │ 23 Nov 25 09:09 UTC │ 23 Nov 25 09:09 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/23 09:09:09
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1123 09:09:09.393949  428718 out.go:360] Setting OutFile to fd 1 ...
	I1123 09:09:09.394192  428718 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 09:09:09.394201  428718 out.go:374] Setting ErrFile to fd 2...
	I1123 09:09:09.394206  428718 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 09:09:09.394406  428718 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21969-103686/.minikube/bin
	I1123 09:09:09.394917  428718 out.go:368] Setting JSON to false
	I1123 09:09:09.396361  428718 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":6689,"bootTime":1763882260,"procs":405,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1123 09:09:09.396420  428718 start.go:143] virtualization: kvm guest
	I1123 09:09:09.398144  428718 out.go:179] * [newest-cni-531046] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1123 09:09:09.399754  428718 notify.go:221] Checking for updates...
	I1123 09:09:09.399766  428718 out.go:179]   - MINIKUBE_LOCATION=21969
	I1123 09:09:09.402731  428718 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1123 09:09:09.404051  428718 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21969-103686/kubeconfig
	I1123 09:09:09.405353  428718 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21969-103686/.minikube
	I1123 09:09:09.406721  428718 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1123 09:09:09.408298  428718 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1123 09:09:09.410076  428718 config.go:182] Loaded profile config "newest-cni-531046": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 09:09:09.410631  428718 driver.go:422] Setting default libvirt URI to qemu:///system
	I1123 09:09:09.438677  428718 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1123 09:09:09.438842  428718 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 09:09:09.499289  428718 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:61 OomKillDisable:false NGoroutines:66 SystemTime:2025-11-23 09:09:09.488360013 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1123 09:09:09.499392  428718 docker.go:319] overlay module found
	I1123 09:09:09.501298  428718 out.go:179] * Using the docker driver based on existing profile
	I1123 09:09:09.502521  428718 start.go:309] selected driver: docker
	I1123 09:09:09.502539  428718 start.go:927] validating driver "docker" against &{Name:newest-cni-531046 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-531046 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 09:09:09.502628  428718 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1123 09:09:09.503156  428718 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 09:09:09.567159  428718 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:61 OomKillDisable:false NGoroutines:66 SystemTime:2025-11-23 09:09:09.555013229 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1123 09:09:09.567643  428718 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1123 09:09:09.567695  428718 cni.go:84] Creating CNI manager for ""
	I1123 09:09:09.567768  428718 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1123 09:09:09.567832  428718 start.go:353] cluster config:
	{Name:newest-cni-531046 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-531046 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 09:09:09.569790  428718 out.go:179] * Starting "newest-cni-531046" primary control-plane node in "newest-cni-531046" cluster
	I1123 09:09:09.570956  428718 cache.go:134] Beginning downloading kic base image for docker with crio
	I1123 09:09:09.573142  428718 out.go:179] * Pulling base image v0.0.48-1763789673-21948 ...
	I1123 09:09:09.574347  428718 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1123 09:09:09.574385  428718 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21969-103686/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1123 09:09:09.574403  428718 cache.go:65] Caching tarball of preloaded images
	I1123 09:09:09.574469  428718 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1123 09:09:09.574518  428718 preload.go:238] Found /home/jenkins/minikube-integration/21969-103686/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1123 09:09:09.574535  428718 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1123 09:09:09.574672  428718 profile.go:143] Saving config to /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/newest-cni-531046/config.json ...
	I1123 09:09:09.596348  428718 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon, skipping pull
	I1123 09:09:09.596375  428718 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in daemon, skipping load
	I1123 09:09:09.596395  428718 cache.go:243] Successfully downloaded all kic artifacts
	I1123 09:09:09.596441  428718 start.go:360] acquireMachinesLock for newest-cni-531046: {Name:mk2e7449a31b4c230f352b5cfe12c4dd1ce5e4f0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1123 09:09:09.596513  428718 start.go:364] duration metric: took 46.31µs to acquireMachinesLock for "newest-cni-531046"
	I1123 09:09:09.596535  428718 start.go:96] Skipping create...Using existing machine configuration
	I1123 09:09:09.596546  428718 fix.go:54] fixHost starting: 
	I1123 09:09:09.596775  428718 cli_runner.go:164] Run: docker container inspect newest-cni-531046 --format={{.State.Status}}
	I1123 09:09:09.615003  428718 fix.go:112] recreateIfNeeded on newest-cni-531046: state=Stopped err=<nil>
	W1123 09:09:09.615044  428718 fix.go:138] unexpected machine state, will restart: <nil>
	I1123 09:09:10.962211  416838 pod_ready.go:94] pod "coredns-66bc5c9577-64rdm" is "Ready"
	I1123 09:09:10.962238  416838 pod_ready.go:86] duration metric: took 41.505811079s for pod "coredns-66bc5c9577-64rdm" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:09:10.964724  416838 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-602386" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:09:10.968263  416838 pod_ready.go:94] pod "etcd-default-k8s-diff-port-602386" is "Ready"
	I1123 09:09:10.968282  416838 pod_ready.go:86] duration metric: took 3.536222ms for pod "etcd-default-k8s-diff-port-602386" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:09:10.969953  416838 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-602386" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:09:10.973341  416838 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-602386" is "Ready"
	I1123 09:09:10.973358  416838 pod_ready.go:86] duration metric: took 3.359803ms for pod "kube-apiserver-default-k8s-diff-port-602386" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:09:10.975266  416838 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-602386" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:09:11.160920  416838 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-602386" is "Ready"
	I1123 09:09:11.160945  416838 pod_ready.go:86] duration metric: took 185.660534ms for pod "kube-controller-manager-default-k8s-diff-port-602386" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:09:11.361102  416838 pod_ready.go:83] waiting for pod "kube-proxy-wnrqx" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:09:11.760631  416838 pod_ready.go:94] pod "kube-proxy-wnrqx" is "Ready"
	I1123 09:09:11.760661  416838 pod_ready.go:86] duration metric: took 399.534821ms for pod "kube-proxy-wnrqx" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:09:11.961014  416838 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-602386" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:09:12.360788  416838 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-602386" is "Ready"
	I1123 09:09:12.360818  416838 pod_ready.go:86] duration metric: took 399.779479ms for pod "kube-scheduler-default-k8s-diff-port-602386" in "kube-system" namespace to be "Ready" or be gone ...
	I1123 09:09:12.360830  416838 pod_ready.go:40] duration metric: took 42.908765939s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1123 09:09:12.404049  416838 start.go:625] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1123 09:09:12.405650  416838 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-602386" cluster and "default" namespace by default
	I1123 09:09:09.616814  428718 out.go:252] * Restarting existing docker container for "newest-cni-531046" ...
	I1123 09:09:09.616880  428718 cli_runner.go:164] Run: docker start newest-cni-531046
	I1123 09:09:09.907672  428718 cli_runner.go:164] Run: docker container inspect newest-cni-531046 --format={{.State.Status}}
	I1123 09:09:09.927111  428718 kic.go:430] container "newest-cni-531046" state is running.
	I1123 09:09:09.927497  428718 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-531046
	I1123 09:09:09.947618  428718 profile.go:143] Saving config to /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/newest-cni-531046/config.json ...
	I1123 09:09:09.947894  428718 machine.go:94] provisionDockerMachine start ...
	I1123 09:09:09.948010  428718 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-531046
	I1123 09:09:09.972117  428718 main.go:143] libmachine: Using SSH client type: native
	I1123 09:09:09.972394  428718 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I1123 09:09:09.972403  428718 main.go:143] libmachine: About to run SSH command:
	hostname
	I1123 09:09:09.973126  428718 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:56888->127.0.0.1:33133: read: connection reset by peer
	I1123 09:09:13.118820  428718 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-531046
	
	I1123 09:09:13.118862  428718 ubuntu.go:182] provisioning hostname "newest-cni-531046"
	I1123 09:09:13.118924  428718 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-531046
	I1123 09:09:13.137403  428718 main.go:143] libmachine: Using SSH client type: native
	I1123 09:09:13.137732  428718 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I1123 09:09:13.137754  428718 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-531046 && echo "newest-cni-531046" | sudo tee /etc/hostname
	I1123 09:09:13.292448  428718 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-531046
	
	I1123 09:09:13.292567  428718 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-531046
	I1123 09:09:13.312639  428718 main.go:143] libmachine: Using SSH client type: native
	I1123 09:09:13.312883  428718 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I1123 09:09:13.312902  428718 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-531046' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-531046/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-531046' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1123 09:09:13.456742  428718 main.go:143] libmachine: SSH cmd err, output: <nil>: 
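
[Editor's note] The shell fragment above guards the 127.0.1.1 edit so that re-running provisioning is harmless: it only touches /etc/hosts when no line already names the host, and only rewrites the 127.0.1.1 entry if one exists. A minimal Go sketch of how such a guarded command could be rendered for an arbitrary hostname (renderHostsGuard is a hypothetical helper for illustration, not the actual ubuntu.go code):

package main

import "fmt"

// renderHostsGuard reproduces the guarded /etc/hosts edit shown above:
// add or rewrite the 127.0.1.1 line only when no entry for the hostname
// exists yet, so re-provisioning the same machine is a no-op.
func renderHostsGuard(hostname string) string {
	return fmt.Sprintf(`
		if ! grep -xq '.*\s%[1]s' /etc/hosts; then
			if grep -xq '127.0.1.1\s.*' /etc/hosts; then
				sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts;
			else
				echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts;
			fi
		fi`, hostname)
}

func main() {
	fmt.Println(renderHostsGuard("newest-cni-531046"))
}
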
	I1123 09:09:13.456786  428718 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21969-103686/.minikube CaCertPath:/home/jenkins/minikube-integration/21969-103686/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21969-103686/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21969-103686/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21969-103686/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21969-103686/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21969-103686/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21969-103686/.minikube}
	I1123 09:09:13.456823  428718 ubuntu.go:190] setting up certificates
	I1123 09:09:13.456836  428718 provision.go:84] configureAuth start
	I1123 09:09:13.456907  428718 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-531046
	I1123 09:09:13.476479  428718 provision.go:143] copyHostCerts
	I1123 09:09:13.476551  428718 exec_runner.go:144] found /home/jenkins/minikube-integration/21969-103686/.minikube/ca.pem, removing ...
	I1123 09:09:13.476578  428718 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21969-103686/.minikube/ca.pem
	I1123 09:09:13.476667  428718 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21969-103686/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21969-103686/.minikube/ca.pem (1078 bytes)
	I1123 09:09:13.476821  428718 exec_runner.go:144] found /home/jenkins/minikube-integration/21969-103686/.minikube/cert.pem, removing ...
	I1123 09:09:13.476836  428718 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21969-103686/.minikube/cert.pem
	I1123 09:09:13.476878  428718 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21969-103686/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21969-103686/.minikube/cert.pem (1123 bytes)
	I1123 09:09:13.476962  428718 exec_runner.go:144] found /home/jenkins/minikube-integration/21969-103686/.minikube/key.pem, removing ...
	I1123 09:09:13.476997  428718 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21969-103686/.minikube/key.pem
	I1123 09:09:13.477040  428718 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21969-103686/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21969-103686/.minikube/key.pem (1675 bytes)
	I1123 09:09:13.477127  428718 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21969-103686/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21969-103686/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21969-103686/.minikube/certs/ca-key.pem org=jenkins.newest-cni-531046 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-531046]
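
[Editor's note] The server certificate generated at this step carries both IP and DNS SANs (127.0.0.1, 192.168.76.2, localhost, minikube, newest-cni-531046), so one cert validates for every way the endpoint can be addressed. A stdlib-only sketch producing a certificate with the same SAN list; it is self-signed for brevity, whereas the log shows the real one signed against the minikube CA key:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.newest-cni-531046"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // matches CertExpiration:26280h0m0s below
		DNSNames:     []string{"localhost", "minikube", "newest-cni-531046"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.76.2")},
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	// Self-signed here (template == parent); provision.go signs with the CA instead.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	fmt.Print(string(pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})))
}
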
	I1123 09:09:13.551036  428718 provision.go:177] copyRemoteCerts
	I1123 09:09:13.551092  428718 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1123 09:09:13.551131  428718 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-531046
	I1123 09:09:13.570388  428718 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21969-103686/.minikube/machines/newest-cni-531046/id_rsa Username:docker}
	I1123 09:09:13.674461  428718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-103686/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1123 09:09:13.692480  428718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-103686/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1123 09:09:13.711416  428718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-103686/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1123 09:09:13.728169  428718 provision.go:87] duration metric: took 271.314005ms to configureAuth
	I1123 09:09:13.728202  428718 ubuntu.go:206] setting minikube options for container-runtime
	I1123 09:09:13.728420  428718 config.go:182] Loaded profile config "newest-cni-531046": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 09:09:13.728554  428718 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-531046
	I1123 09:09:13.747174  428718 main.go:143] libmachine: Using SSH client type: native
	I1123 09:09:13.747495  428718 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33133 <nil> <nil>}
	I1123 09:09:13.747521  428718 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1123 09:09:14.068767  428718 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1123 09:09:14.068799  428718 machine.go:97] duration metric: took 4.120887468s to provisionDockerMachine
	I1123 09:09:14.068814  428718 start.go:293] postStartSetup for "newest-cni-531046" (driver="docker")
	I1123 09:09:14.068829  428718 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1123 09:09:14.068900  428718 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1123 09:09:14.068945  428718 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-531046
	I1123 09:09:14.088061  428718 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21969-103686/.minikube/machines/newest-cni-531046/id_rsa Username:docker}
	I1123 09:09:14.190042  428718 ssh_runner.go:195] Run: cat /etc/os-release
	I1123 09:09:14.193920  428718 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1123 09:09:14.193952  428718 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1123 09:09:14.193975  428718 filesync.go:126] Scanning /home/jenkins/minikube-integration/21969-103686/.minikube/addons for local assets ...
	I1123 09:09:14.194042  428718 filesync.go:126] Scanning /home/jenkins/minikube-integration/21969-103686/.minikube/files for local assets ...
	I1123 09:09:14.194148  428718 filesync.go:149] local asset: /home/jenkins/minikube-integration/21969-103686/.minikube/files/etc/ssl/certs/1072342.pem -> 1072342.pem in /etc/ssl/certs
	I1123 09:09:14.194286  428718 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1123 09:09:14.202503  428718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-103686/.minikube/files/etc/ssl/certs/1072342.pem --> /etc/ssl/certs/1072342.pem (1708 bytes)
	I1123 09:09:14.221567  428718 start.go:296] duration metric: took 152.735823ms for postStartSetup
	I1123 09:09:14.221638  428718 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1123 09:09:14.221678  428718 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-531046
	I1123 09:09:14.241073  428718 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21969-103686/.minikube/machines/newest-cni-531046/id_rsa Username:docker}
	I1123 09:09:14.341192  428718 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1123 09:09:14.345736  428718 fix.go:56] duration metric: took 4.749184186s for fixHost
	I1123 09:09:14.345761  428718 start.go:83] releasing machines lock for "newest-cni-531046", held for 4.749236041s
	I1123 09:09:14.345829  428718 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-531046
	I1123 09:09:14.367424  428718 ssh_runner.go:195] Run: cat /version.json
	I1123 09:09:14.367491  428718 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1123 09:09:14.367498  428718 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-531046
	I1123 09:09:14.367566  428718 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-531046
	I1123 09:09:14.387208  428718 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21969-103686/.minikube/machines/newest-cni-531046/id_rsa Username:docker}
	I1123 09:09:14.388547  428718 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21969-103686/.minikube/machines/newest-cni-531046/id_rsa Username:docker}
	I1123 09:09:14.489744  428718 ssh_runner.go:195] Run: systemctl --version
	I1123 09:09:14.553172  428718 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1123 09:09:14.597710  428718 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1123 09:09:14.603833  428718 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1123 09:09:14.603919  428718 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1123 09:09:14.613685  428718 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
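
[Editor's note] Before settling on kindnet, minikube moves any competing bridge/podman CNI configs aside by appending a .mk_disabled suffix, so they can later be restored rather than recreated; here none were found. A rough Go equivalent of that find/mv step (a hypothetical sketch, not the cni.go implementation):

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// disableBridgeCNIs mirrors the `find ... -exec mv {} {}.mk_disabled`
// command above: rename matching CNI configs aside instead of deleting them.
func disableBridgeCNIs(dir string) error {
	for _, pat := range []string{"*bridge*", "*podman*"} {
		matches, err := filepath.Glob(filepath.Join(dir, pat))
		if err != nil {
			return err
		}
		for _, m := range matches {
			if filepath.Ext(m) == ".mk_disabled" {
				continue // already disabled on a previous run
			}
			if err := os.Rename(m, m+".mk_disabled"); err != nil {
				return err
			}
			fmt.Println("disabled:", m)
		}
	}
	return nil
}

func main() {
	if err := disableBridgeCNIs("/etc/cni/net.d"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
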
	I1123 09:09:14.613716  428718 start.go:496] detecting cgroup driver to use...
	I1123 09:09:14.613753  428718 detect.go:190] detected "systemd" cgroup driver on host os
	I1123 09:09:14.613814  428718 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1123 09:09:14.633265  428718 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1123 09:09:14.647148  428718 docker.go:218] disabling cri-docker service (if available) ...
	I1123 09:09:14.647207  428718 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1123 09:09:14.663589  428718 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1123 09:09:14.677157  428718 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1123 09:09:14.766215  428718 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1123 09:09:14.858401  428718 docker.go:234] disabling docker service ...
	I1123 09:09:14.858470  428718 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1123 09:09:14.873312  428718 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1123 09:09:14.888170  428718 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1123 09:09:14.983215  428718 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1123 09:09:15.073382  428718 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1123 09:09:15.086608  428718 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1123 09:09:15.101866  428718 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1123 09:09:15.101935  428718 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 09:09:15.111226  428718 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1123 09:09:15.111288  428718 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 09:09:15.120834  428718 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 09:09:15.130549  428718 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 09:09:15.140695  428718 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1123 09:09:15.148854  428718 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 09:09:15.157864  428718 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 09:09:15.166336  428718 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1123 09:09:15.176067  428718 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1123 09:09:15.183505  428718 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1123 09:09:15.191000  428718 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 09:09:15.295741  428718 ssh_runner.go:195] Run: sudo systemctl restart crio
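
[Editor's note] The sed invocations above pin the pause image, switch CRI-O to the systemd cgroup manager, drop any existing conmon_cgroup line, and re-insert conmon_cgroup = "pod" directly after cgroup_manager before daemon-reload and a crio restart (the default_sysctls edit for net.ipv4.ip_unprivileged_port_start is omitted here). The same three core edits expressed as a local Go sketch over an assumed copy of 02-crio.conf; minikube runs sed over SSH instead:

package main

import (
	"fmt"
	"os"
	"regexp"
)

func main() {
	path := "02-crio.conf" // assumed local copy for illustration
	data, err := os.ReadFile(path)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	conf := string(data)
	// pin the pause image, matching `sed 's|^.*pause_image = .*$|...|'`
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10.1"`)
	// drop any existing conmon_cgroup line, matching `sed '/conmon_cgroup = .*/d'`
	conf = regexp.MustCompile(`(?m)^conmon_cgroup = .*\n?`).ReplaceAllString(conf, "")
	// force systemd cgroups and re-append conmon_cgroup right after it
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, "cgroup_manager = \"systemd\"\nconmon_cgroup = \"pod\"")
	if err := os.WriteFile(path, []byte(conf), 0o644); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
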
	I1123 09:09:15.433605  428718 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1123 09:09:15.433681  428718 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1123 09:09:15.439424  428718 start.go:564] Will wait 60s for crictl version
	I1123 09:09:15.439490  428718 ssh_runner.go:195] Run: which crictl
	I1123 09:09:15.444124  428718 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1123 09:09:15.469766  428718 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1123 09:09:15.469843  428718 ssh_runner.go:195] Run: crio --version
	I1123 09:09:15.500595  428718 ssh_runner.go:195] Run: crio --version
	I1123 09:09:15.539580  428718 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1123 09:09:15.540673  428718 cli_runner.go:164] Run: docker network inspect newest-cni-531046 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1123 09:09:15.559666  428718 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1123 09:09:15.564697  428718 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1123 09:09:15.581138  428718 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1123 09:09:15.582462  428718 kubeadm.go:884] updating cluster {Name:newest-cni-531046 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-531046 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1123 09:09:15.582650  428718 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1123 09:09:15.582727  428718 ssh_runner.go:195] Run: sudo crictl images --output json
	I1123 09:09:15.616458  428718 crio.go:514] all images are preloaded for cri-o runtime.
	I1123 09:09:15.616482  428718 crio.go:433] Images already preloaded, skipping extraction
	I1123 09:09:15.616540  428718 ssh_runner.go:195] Run: sudo crictl images --output json
	I1123 09:09:15.642742  428718 crio.go:514] all images are preloaded for cri-o runtime.
	I1123 09:09:15.642763  428718 cache_images.go:86] Images are preloaded, skipping loading
	I1123 09:09:15.642771  428718 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1123 09:09:15.642861  428718 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-531046 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-531046 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1123 09:09:15.642928  428718 ssh_runner.go:195] Run: crio config
	I1123 09:09:15.691553  428718 cni.go:84] Creating CNI manager for ""
	I1123 09:09:15.691572  428718 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1123 09:09:15.691591  428718 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1123 09:09:15.691621  428718 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-531046 NodeName:newest-cni-531046 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1123 09:09:15.691777  428718 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-531046"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
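[Editor's note] The rendered kubeadm.yaml above is four stacked YAML documents (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) separated by --- markers, which is why a single scp of 2211 bytes suffices below. A stdlib-only Go sketch that splits such a file and reports each document's kind (assumes a local kubeadm.yaml copy):

package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	data, err := os.ReadFile("kubeadm.yaml") // assumed local copy
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	// Split on document separators and pull out each top-level "kind:".
	for i, doc := range strings.Split(string(data), "\n---\n") {
		kind := "(unknown)"
		for _, line := range strings.Split(doc, "\n") {
			if strings.HasPrefix(line, "kind: ") {
				kind = strings.TrimPrefix(line, "kind: ")
				break
			}
		}
		fmt.Printf("document %d: %s\n", i, kind)
	}
}
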
	I1123 09:09:15.691843  428718 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1123 09:09:15.700340  428718 binaries.go:51] Found k8s binaries, skipping transfer
	I1123 09:09:15.700413  428718 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1123 09:09:15.710236  428718 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1123 09:09:15.727317  428718 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1123 09:09:15.743376  428718 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2211 bytes)
	I1123 09:09:15.758455  428718 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1123 09:09:15.762936  428718 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
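
[Editor's note] Both /etc/hosts updates (host.minikube.internal at 09:09:15.564697 above and control-plane.minikube.internal here) use a grep -v / echo / cp pipeline rather than sed -i, presumably because /etc/hosts is bind-mounted inside the container and replacing it by rename would fail; copying over the file rewrites it in place and keeps the inode. The same replace-and-rewrite logic in Go (ensureHostEntry is a hypothetical helper, shown against a local file):

package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostEntry drops any line already mapping the name, appends the
// fresh mapping, and writes back via truncation (O_TRUNC) rather than a
// temp-file rename, so the original inode survives.
func ensureHostEntry(path, ip, name string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(string(data), "\n") {
		if strings.HasSuffix(line, "\t"+name) {
			continue // stale entry, replaced below
		}
		if line != "" {
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+name)
	// os.WriteFile opens with O_TRUNC: an in-place rewrite, not a rename.
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
}

func main() {
	if err := ensureHostEntry("hosts", "192.168.76.2", "control-plane.minikube.internal"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
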
	I1123 09:09:15.773856  428718 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 09:09:15.864228  428718 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 09:09:15.886692  428718 certs.go:69] Setting up /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/newest-cni-531046 for IP: 192.168.76.2
	I1123 09:09:15.886715  428718 certs.go:195] generating shared ca certs ...
	I1123 09:09:15.886734  428718 certs.go:227] acquiring lock for ca certs: {Name:mkeed0bc088459219396fb35c578dfb0927f31a3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 09:09:15.886911  428718 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21969-103686/.minikube/ca.key
	I1123 09:09:15.886986  428718 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21969-103686/.minikube/proxy-client-ca.key
	I1123 09:09:15.887002  428718 certs.go:257] generating profile certs ...
	I1123 09:09:15.887116  428718 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/newest-cni-531046/client.key
	I1123 09:09:15.887192  428718 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/newest-cni-531046/apiserver.key.a1ea44be
	I1123 09:09:15.887245  428718 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/newest-cni-531046/proxy-client.key
	I1123 09:09:15.887384  428718 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-103686/.minikube/certs/107234.pem (1338 bytes)
	W1123 09:09:15.887428  428718 certs.go:480] ignoring /home/jenkins/minikube-integration/21969-103686/.minikube/certs/107234_empty.pem, impossibly tiny 0 bytes
	I1123 09:09:15.887442  428718 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-103686/.minikube/certs/ca-key.pem (1675 bytes)
	I1123 09:09:15.887489  428718 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-103686/.minikube/certs/ca.pem (1078 bytes)
	I1123 09:09:15.887522  428718 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-103686/.minikube/certs/cert.pem (1123 bytes)
	I1123 09:09:15.887550  428718 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-103686/.minikube/certs/key.pem (1675 bytes)
	I1123 09:09:15.887610  428718 certs.go:484] found cert: /home/jenkins/minikube-integration/21969-103686/.minikube/files/etc/ssl/certs/1072342.pem (1708 bytes)
	I1123 09:09:15.888391  428718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-103686/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1123 09:09:15.908489  428718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-103686/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1123 09:09:15.931840  428718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-103686/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1123 09:09:15.955677  428718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-103686/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1671 bytes)
	I1123 09:09:15.980595  428718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/newest-cni-531046/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1123 09:09:16.003555  428718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/newest-cni-531046/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1123 09:09:16.021453  428718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/newest-cni-531046/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1123 09:09:16.038502  428718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/newest-cni-531046/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1123 09:09:16.055883  428718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-103686/.minikube/files/etc/ssl/certs/1072342.pem --> /usr/share/ca-certificates/1072342.pem (1708 bytes)
	I1123 09:09:16.072577  428718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-103686/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1123 09:09:16.090199  428718 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21969-103686/.minikube/certs/107234.pem --> /usr/share/ca-certificates/107234.pem (1338 bytes)
	I1123 09:09:16.108367  428718 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1123 09:09:16.122045  428718 ssh_runner.go:195] Run: openssl version
	I1123 09:09:16.128705  428718 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1123 09:09:16.136943  428718 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1123 09:09:16.140531  428718 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 23 08:20 /usr/share/ca-certificates/minikubeCA.pem
	I1123 09:09:16.140588  428718 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1123 09:09:16.178739  428718 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1123 09:09:16.187754  428718 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/107234.pem && ln -fs /usr/share/ca-certificates/107234.pem /etc/ssl/certs/107234.pem"
	I1123 09:09:16.195960  428718 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/107234.pem
	I1123 09:09:16.199816  428718 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 23 08:25 /usr/share/ca-certificates/107234.pem
	I1123 09:09:16.199868  428718 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/107234.pem
	I1123 09:09:16.237427  428718 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/107234.pem /etc/ssl/certs/51391683.0"
	I1123 09:09:16.246469  428718 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1072342.pem && ln -fs /usr/share/ca-certificates/1072342.pem /etc/ssl/certs/1072342.pem"
	I1123 09:09:16.255027  428718 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1072342.pem
	I1123 09:09:16.258823  428718 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 23 08:25 /usr/share/ca-certificates/1072342.pem
	I1123 09:09:16.258886  428718 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1072342.pem
	I1123 09:09:16.299069  428718 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/1072342.pem /etc/ssl/certs/3ec20f2e.0"
	I1123 09:09:16.308045  428718 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1123 09:09:16.312321  428718 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1123 09:09:16.349349  428718 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1123 09:09:16.387826  428718 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1123 09:09:16.435139  428718 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1123 09:09:16.482951  428718 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1123 09:09:16.533236  428718 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
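
[Editor's note] Each `openssl x509 -checkend 86400` run above asks whether the certificate expires within the next 86400 seconds (24 hours); a non-zero exit would trigger regeneration before reuse. A crypto/x509 equivalent of that check (a sketch; the log shows minikube shelling out to openssl on the node instead, and the path below is illustrative):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires
// within duration d, mirroring `openssl x509 -checkend`.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block found", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	fmt.Println("expires within 24h:", soon)
}
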
	I1123 09:09:16.591746  428718 kubeadm.go:401] StartCluster: {Name:newest-cni-531046 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-531046 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 09:09:16.591897  428718 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1123 09:09:16.592012  428718 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1123 09:09:16.623916  428718 cri.go:89] found id: "b8d492ab9433edafd1001b1ad9293c111df36e0796915a8d3f0c6bc7c2cdf3df"
	I1123 09:09:16.623942  428718 cri.go:89] found id: "0349a0b9c0911ac10237b136d83d49de278765fa5222cc116b95ab287527cd9b"
	I1123 09:09:16.623948  428718 cri.go:89] found id: "6a43edcb0ace54dc346700c8af14f2c2903a53edccf3417648cd37fa8485786d"
	I1123 09:09:16.623952  428718 cri.go:89] found id: "4e62ba65019726752dfd1a28db17ceb7288f5f526cdecef122cccdc9395928a0"
	I1123 09:09:16.623956  428718 cri.go:89] found id: ""
	I1123 09:09:16.624037  428718 ssh_runner.go:195] Run: sudo runc list -f json
	W1123 09:09:16.637501  428718 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T09:09:16Z" level=error msg="open /run/runc: no such file or directory"
	I1123 09:09:16.637584  428718 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1123 09:09:16.647076  428718 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1123 09:09:16.647101  428718 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1123 09:09:16.647174  428718 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1123 09:09:16.656920  428718 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1123 09:09:16.658079  428718 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-531046" does not appear in /home/jenkins/minikube-integration/21969-103686/kubeconfig
	I1123 09:09:16.658732  428718 kubeconfig.go:62] /home/jenkins/minikube-integration/21969-103686/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-531046" cluster setting kubeconfig missing "newest-cni-531046" context setting]
	I1123 09:09:16.659991  428718 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21969-103686/kubeconfig: {Name:mk8cdc20da988b29689f768b3cb01ff7e637077d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 09:09:16.661957  428718 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1123 09:09:16.670780  428718 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1123 09:09:16.670810  428718 kubeadm.go:602] duration metric: took 23.701311ms to restartPrimaryControlPlane
	I1123 09:09:16.670821  428718 kubeadm.go:403] duration metric: took 79.16679ms to StartCluster
	I1123 09:09:16.670837  428718 settings.go:142] acquiring lock: {Name:mk7e59eae8b3289f60fef384e6a5716369959bb2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 09:09:16.670894  428718 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21969-103686/kubeconfig
	I1123 09:09:16.673044  428718 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21969-103686/kubeconfig: {Name:mk8cdc20da988b29689f768b3cb01ff7e637077d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1123 09:09:16.673289  428718 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1123 09:09:16.673479  428718 config.go:182] Loaded profile config "newest-cni-531046": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 09:09:16.673459  428718 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1123 09:09:16.673557  428718 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-531046"
	I1123 09:09:16.673580  428718 addons.go:70] Setting dashboard=true in profile "newest-cni-531046"
	I1123 09:09:16.673603  428718 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-531046"
	I1123 09:09:16.673610  428718 addons.go:239] Setting addon dashboard=true in "newest-cni-531046"
	W1123 09:09:16.673613  428718 addons.go:248] addon storage-provisioner should already be in state true
	W1123 09:09:16.673619  428718 addons.go:248] addon dashboard should already be in state true
	I1123 09:09:16.673619  428718 addons.go:70] Setting default-storageclass=true in profile "newest-cni-531046"
	I1123 09:09:16.673637  428718 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-531046"
	I1123 09:09:16.673641  428718 host.go:66] Checking if "newest-cni-531046" exists ...
	I1123 09:09:16.673653  428718 host.go:66] Checking if "newest-cni-531046" exists ...
	I1123 09:09:16.673957  428718 cli_runner.go:164] Run: docker container inspect newest-cni-531046 --format={{.State.Status}}
	I1123 09:09:16.674200  428718 cli_runner.go:164] Run: docker container inspect newest-cni-531046 --format={{.State.Status}}
	I1123 09:09:16.674201  428718 cli_runner.go:164] Run: docker container inspect newest-cni-531046 --format={{.State.Status}}
	I1123 09:09:16.674767  428718 out.go:179] * Verifying Kubernetes components...
	I1123 09:09:16.675943  428718 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1123 09:09:16.701001  428718 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1123 09:09:16.702065  428718 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1123 09:09:16.702082  428718 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1123 09:09:16.702722  428718 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-531046
	I1123 09:09:16.703253  428718 addons.go:239] Setting addon default-storageclass=true in "newest-cni-531046"
	W1123 09:09:16.703273  428718 addons.go:248] addon default-storageclass should already be in state true
	I1123 09:09:16.703305  428718 host.go:66] Checking if "newest-cni-531046" exists ...
	I1123 09:09:16.703772  428718 cli_runner.go:164] Run: docker container inspect newest-cni-531046 --format={{.State.Status}}
	I1123 09:09:16.704323  428718 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1123 09:09:16.705829  428718 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1123 09:09:16.706914  428718 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1123 09:09:16.706958  428718 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1123 09:09:16.707051  428718 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-531046
	I1123 09:09:16.741059  428718 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21969-103686/.minikube/machines/newest-cni-531046/id_rsa Username:docker}
	I1123 09:09:16.742145  428718 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1123 09:09:16.742209  428718 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1123 09:09:16.742331  428718 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-531046
	I1123 09:09:16.744371  428718 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21969-103686/.minikube/machines/newest-cni-531046/id_rsa Username:docker}
	I1123 09:09:16.772639  428718 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33133 SSHKeyPath:/home/jenkins/minikube-integration/21969-103686/.minikube/machines/newest-cni-531046/id_rsa Username:docker}
	I1123 09:09:16.838556  428718 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1123 09:09:16.855010  428718 api_server.go:52] waiting for apiserver process to appear ...
	I1123 09:09:16.855122  428718 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1123 09:09:16.868146  428718 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1123 09:09:16.869823  428718 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1123 09:09:16.869853  428718 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1123 09:09:16.872718  428718 api_server.go:72] duration metric: took 199.388215ms to wait for apiserver process to appear ...
	I1123 09:09:16.872738  428718 api_server.go:88] waiting for apiserver healthz status ...
	I1123 09:09:16.872782  428718 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1123 09:09:16.887859  428718 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1123 09:09:16.887883  428718 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1123 09:09:16.904333  428718 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1123 09:09:16.909029  428718 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1123 09:09:16.909058  428718 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1123 09:09:16.927238  428718 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1123 09:09:16.927274  428718 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1123 09:09:16.948202  428718 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1123 09:09:16.948230  428718 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1123 09:09:16.968718  428718 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1123 09:09:16.968755  428718 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1123 09:09:16.986286  428718 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1123 09:09:16.986318  428718 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1123 09:09:17.003049  428718 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1123 09:09:17.003130  428718 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1123 09:09:17.018884  428718 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1123 09:09:17.018911  428718 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1123 09:09:17.034757  428718 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1123 09:09:18.395495  428718 api_server.go:279] https://192.168.76.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1123 09:09:18.395530  428718 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1123 09:09:18.395546  428718 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1123 09:09:18.409704  428718 api_server.go:279] https://192.168.76.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1123 09:09:18.409739  428718 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1123 09:09:18.873245  428718 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1123 09:09:18.877442  428718 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1123 09:09:18.877468  428718 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1123 09:09:18.924122  428718 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.055941929s)
	I1123 09:09:18.924171  428718 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.019794928s)
	I1123 09:09:18.924270  428718 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.889470808s)
	I1123 09:09:18.926158  428718 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-531046 addons enable metrics-server
	
	I1123 09:09:18.934451  428718 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1123 09:09:18.935583  428718 addons.go:530] duration metric: took 2.262123063s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1123 09:09:19.373799  428718 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1123 09:09:19.378037  428718 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1123 09:09:19.378064  428718 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1123 09:09:19.873454  428718 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1123 09:09:19.878905  428718 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1123 09:09:19.879992  428718 api_server.go:141] control plane version: v1.34.1
	I1123 09:09:19.880021  428718 api_server.go:131] duration metric: took 3.007275014s to wait for apiserver health ...
	I1123 09:09:19.880032  428718 system_pods.go:43] waiting for kube-system pods to appear ...
	I1123 09:09:19.883383  428718 system_pods.go:59] 8 kube-system pods found
	I1123 09:09:19.883415  428718 system_pods.go:61] "coredns-66bc5c9577-gk265" [0216f458-438b-4260-8320-f81fb2a01fd4] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1123 09:09:19.883422  428718 system_pods.go:61] "etcd-newest-cni-531046" [1003fb1b-b28b-499c-b1e6-5c8b3d23d4bf] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1123 09:09:19.883428  428718 system_pods.go:61] "kindnet-pbp7c" [72da9944-1b43-4f59-b27a-78a6ebd8f3dc] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1123 09:09:19.883437  428718 system_pods.go:61] "kube-apiserver-newest-cni-531046" [92975545-d846-4326-9cc5-cf12a61f834b] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1123 09:09:19.883445  428718 system_pods.go:61] "kube-controller-manager-newest-cni-531046" [769616d3-3a60-45b1-9246-80ccba447cb5] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1123 09:09:19.883460  428718 system_pods.go:61] "kube-proxy-4bpzx" [a0812143-d250-4445-85b7-dc7d4dbb23ad] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1123 09:09:19.883468  428718 system_pods.go:61] "kube-scheduler-newest-cni-531046" [f713d5f5-1579-48f4-b2f3-9340bfc94c84] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1123 09:09:19.883479  428718 system_pods.go:61] "storage-provisioner" [d15b527f-4a7d-4cd4-bd83-5f0ec906909f] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1123 09:09:19.883485  428718 system_pods.go:74] duration metric: took 3.447563ms to wait for pod list to return data ...
	I1123 09:09:19.883492  428718 default_sa.go:34] waiting for default service account to be created ...
	I1123 09:09:19.886038  428718 default_sa.go:45] found service account: "default"
	I1123 09:09:19.886055  428718 default_sa.go:55] duration metric: took 2.555301ms for default service account to be created ...
	I1123 09:09:19.886067  428718 kubeadm.go:587] duration metric: took 3.212741373s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1123 09:09:19.886084  428718 node_conditions.go:102] verifying NodePressure condition ...
	I1123 09:09:19.888475  428718 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1123 09:09:19.888510  428718 node_conditions.go:123] node cpu capacity is 8
	I1123 09:09:19.888527  428718 node_conditions.go:105] duration metric: took 2.434606ms to run NodePressure ...
	I1123 09:09:19.888549  428718 start.go:242] waiting for startup goroutines ...
	I1123 09:09:19.888563  428718 start.go:247] waiting for cluster config update ...
	I1123 09:09:19.888578  428718 start.go:256] writing updated cluster config ...
	I1123 09:09:19.888867  428718 ssh_runner.go:195] Run: rm -f paused
	I1123 09:09:19.937632  428718 start.go:625] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1123 09:09:19.945384  428718 out.go:179] * Done! kubectl is now configured to use "newest-cni-531046" cluster and "default" namespace by default
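	
	Note: the 500 responses above are the apiserver's verbose /healthz output; the two [-] entries (rbac/bootstrap-roles and scheduling/bootstrap-system-priority-classes) are post-start hooks that routinely report failure for a second or two while the control plane finishes bootstrapping, which is why the next poll at 09:09:19.878 returns 200. The same per-check breakdown can be pulled from a live cluster with:
	
		kubectl get --raw='/healthz?verbose'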
	
	
	==> CRI-O <==
	Nov 23 09:08:39 default-k8s-diff-port-602386 crio[568]: time="2025-11-23T09:08:39.728926672Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 23 09:08:39 default-k8s-diff-port-602386 crio[568]: time="2025-11-23T09:08:39.728964463Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 23 09:08:39 default-k8s-diff-port-602386 crio[568]: time="2025-11-23T09:08:39.729006911Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 23 09:08:39 default-k8s-diff-port-602386 crio[568]: time="2025-11-23T09:08:39.734079097Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 23 09:08:39 default-k8s-diff-port-602386 crio[568]: time="2025-11-23T09:08:39.73412483Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 23 09:08:39 default-k8s-diff-port-602386 crio[568]: time="2025-11-23T09:08:39.734150713Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 23 09:08:39 default-k8s-diff-port-602386 crio[568]: time="2025-11-23T09:08:39.741946353Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 23 09:08:39 default-k8s-diff-port-602386 crio[568]: time="2025-11-23T09:08:39.742017625Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 23 09:08:39 default-k8s-diff-port-602386 crio[568]: time="2025-11-23T09:08:39.742041733Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 23 09:08:39 default-k8s-diff-port-602386 crio[568]: time="2025-11-23T09:08:39.746398075Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 23 09:08:39 default-k8s-diff-port-602386 crio[568]: time="2025-11-23T09:08:39.746433372Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 23 09:08:39 default-k8s-diff-port-602386 crio[568]: time="2025-11-23T09:08:39.746454893Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 23 09:08:39 default-k8s-diff-port-602386 crio[568]: time="2025-11-23T09:08:39.751451745Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 23 09:08:39 default-k8s-diff-port-602386 crio[568]: time="2025-11-23T09:08:39.751480193Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 23 09:08:53 default-k8s-diff-port-602386 crio[568]: time="2025-11-23T09:08:53.809493307Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=9540d810-4ac0-40d4-807f-790ffa5da693 name=/runtime.v1.ImageService/ImageStatus
	Nov 23 09:08:53 default-k8s-diff-port-602386 crio[568]: time="2025-11-23T09:08:53.816002773Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=af714aed-e4c1-4ecc-839e-a0f4e152f8d2 name=/runtime.v1.ImageService/ImageStatus
	Nov 23 09:08:53 default-k8s-diff-port-602386 crio[568]: time="2025-11-23T09:08:53.819307516Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-4j425/dashboard-metrics-scraper" id=bb8502e7-1576-47ec-9615-01a4d039db4f name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 09:08:53 default-k8s-diff-port-602386 crio[568]: time="2025-11-23T09:08:53.819541623Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 09:08:53 default-k8s-diff-port-602386 crio[568]: time="2025-11-23T09:08:53.827744441Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 09:08:53 default-k8s-diff-port-602386 crio[568]: time="2025-11-23T09:08:53.82834853Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 23 09:08:53 default-k8s-diff-port-602386 crio[568]: time="2025-11-23T09:08:53.850372073Z" level=info msg="Created container 11275f4b0df65c4816abcdde0d17361833f91eff663b0579a1fa05e5bb378cdd: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-4j425/dashboard-metrics-scraper" id=bb8502e7-1576-47ec-9615-01a4d039db4f name=/runtime.v1.RuntimeService/CreateContainer
	Nov 23 09:08:53 default-k8s-diff-port-602386 crio[568]: time="2025-11-23T09:08:53.85164864Z" level=info msg="Starting container: 11275f4b0df65c4816abcdde0d17361833f91eff663b0579a1fa05e5bb378cdd" id=df3de171-8181-4026-91c1-ddc439e4f725 name=/runtime.v1.RuntimeService/StartContainer
	Nov 23 09:08:53 default-k8s-diff-port-602386 crio[568]: time="2025-11-23T09:08:53.856294375Z" level=info msg="Started container" PID=1800 containerID=11275f4b0df65c4816abcdde0d17361833f91eff663b0579a1fa05e5bb378cdd description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-4j425/dashboard-metrics-scraper id=df3de171-8181-4026-91c1-ddc439e4f725 name=/runtime.v1.RuntimeService/StartContainer sandboxID=d3a3fe9184c722ec7613aba209e2fd11ae0eb7c3dbb64b8d8e1e20644d8644c0
	Nov 23 09:08:53 default-k8s-diff-port-602386 crio[568]: time="2025-11-23T09:08:53.942732102Z" level=info msg="Removing container: 53c19b710b42583be2a6cf92885f9fcdf990c53b41b950a2b0cd3f7ef6687566" id=7b29b7cb-a30a-4324-81f5-d1b7bb23b1c4 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 23 09:08:53 default-k8s-diff-port-602386 crio[568]: time="2025-11-23T09:08:53.95371464Z" level=info msg="Removed container 53c19b710b42583be2a6cf92885f9fcdf990c53b41b950a2b0cd3f7ef6687566: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-4j425/dashboard-metrics-scraper" id=7b29b7cb-a30a-4324-81f5-d1b7bb23b1c4 name=/runtime.v1.RuntimeService/RemoveContainer
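	
	Note: the CreateContainer/StartContainer/RemoveContainer cycle for dashboard-metrics-scraper above is CRI-O servicing the kubelet's CrashLoopBackOff restarts (see the kubelet section below). One way to follow the same stream on this profile's node (assuming the crio systemd unit name used by the kicbase image):
	
		minikube -p default-k8s-diff-port-602386 ssh -- sudo journalctl -u crio --no-pager -n 50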
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                                    NAMESPACE
	11275f4b0df65       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           36 seconds ago       Exited              dashboard-metrics-scraper   2                   d3a3fe9184c72       dashboard-metrics-scraper-6ffb444bf9-4j425             kubernetes-dashboard
	91e83b67da04c       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   50 seconds ago       Running             kubernetes-dashboard        0                   6278b30e6324c       kubernetes-dashboard-855c9754f9-kvdxq                  kubernetes-dashboard
	f14ee7020f52e       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           About a minute ago   Running             storage-provisioner         1                   2814f8968b14c       storage-provisioner                                    kube-system
	cd96cf7cc0e77       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           About a minute ago   Running             coredns                     0                   f1c513ba8f249       coredns-66bc5c9577-64rdm                               kube-system
	c5b6451a6b50f       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           About a minute ago   Running             busybox                     1                   42e61f64f3b4a       busybox                                                default
	aa26fc448ed80       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                           About a minute ago   Running             kube-proxy                  0                   1f1db409c8ec8       kube-proxy-wnrqx                                       kube-system
	afed45fbbc92d       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           About a minute ago   Running             kindnet-cni                 0                   ab1752e4ab5c3       kindnet-kqj66                                          kube-system
	5eb3b51fac344       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           About a minute ago   Exited              storage-provisioner         0                   2814f8968b14c       storage-provisioner                                    kube-system
	59138b2d82268       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                           About a minute ago   Running             kube-apiserver              0                   70ae2660b4ec8       kube-apiserver-default-k8s-diff-port-602386            kube-system
	1adb64fac9cd8       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                           About a minute ago   Running             kube-scheduler              0                   78578c3b14c60       kube-scheduler-default-k8s-diff-port-602386            kube-system
	cb6038e0d1fc6       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                           About a minute ago   Running             etcd                        0                   83df9c15b47b4       etcd-default-k8s-diff-port-602386                      kube-system
	88d09657521f5       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                           About a minute ago   Running             kube-controller-manager     0                   139aefc5864df       kube-controller-manager-default-k8s-diff-port-602386   kube-system
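	
	Note: this table is CRI-level state, the same shape `sudo crictl ps -a` prints on the node. The Exited dashboard-metrics-scraper row with ATTEMPT 2 next to a Running sandbox is the crash loop visible in the kubelet section; its last run's output could be recovered by container ID prefix, e.g.:
	
		sudo crictl logs 11275f4b0df65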
	
	
	==> coredns [cd96cf7cc0e773f467f3b68dff638e0dd554eef88b837e152f12725e95e7f10d] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = c7556d8fdf49c5e32a9077be8cfb9fc6947bb07e663a10d55b192eb63ad1f2bd9793e8e5f5a36fc9abb1957831eec5c997fd9821790e3990ae9531bf41ecea37
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:47939 - 58161 "HINFO IN 7168959024589116106.845910367791227650. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.024706783s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/ready: Still waiting on: "kubernetes"
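	
	Note: the dial tcp 10.96.0.1:443 i/o timeouts show CoreDNS unable to reach the kubernetes Service VIP while the apiserver restarted; the reflectors recover once a list succeeds. A quick in-cluster probe of that VIP (pod name and image are illustrative, not part of the test) would be:
	
		kubectl run apiserver-probe --rm -i --restart=Never --image=curlimages/curl -- curl -ksS https://10.96.0.1/version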
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-602386
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-602386
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=50c3a8a3c03e8a84b6c978a884d21c3de8c6d4f1
	                    minikube.k8s.io/name=default-k8s-diff-port-602386
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_23T09_07_31_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 23 Nov 2025 09:07:28 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-602386
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 23 Nov 2025 09:09:19 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 23 Nov 2025 09:09:19 +0000   Sun, 23 Nov 2025 09:07:26 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 23 Nov 2025 09:09:19 +0000   Sun, 23 Nov 2025 09:07:26 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 23 Nov 2025 09:09:19 +0000   Sun, 23 Nov 2025 09:07:26 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 23 Nov 2025 09:09:19 +0000   Sun, 23 Nov 2025 09:07:47 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    default-k8s-diff-port-602386
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 9629f1d5bc1ed524a56ce23c69214c09
	  System UUID:                080d5fdd-e379-43ff-bc41-4910fe3f507a
	  Boot ID:                    49386d37-de97-442f-8364-9b77578bcf47
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         99s
	  kube-system                 coredns-66bc5c9577-64rdm                                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     114s
	  kube-system                 etcd-default-k8s-diff-port-602386                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         2m
	  kube-system                 kindnet-kqj66                                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      114s
	  kube-system                 kube-apiserver-default-k8s-diff-port-602386             250m (3%)     0 (0%)      0 (0%)           0 (0%)         2m
	  kube-system                 kube-controller-manager-default-k8s-diff-port-602386    200m (2%)     0 (0%)      0 (0%)           0 (0%)         2m
	  kube-system                 kube-proxy-wnrqx                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         114s
	  kube-system                 kube-scheduler-default-k8s-diff-port-602386             100m (1%)     0 (0%)      0 (0%)           0 (0%)         2m
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         115s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-4j425              0 (0%)        0 (0%)      0 (0%)           0 (0%)         58s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-kvdxq                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         58s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 113s                 kube-proxy       
	  Normal  Starting                 61s                  kube-proxy       
	  Normal  Starting                 2m5s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m5s (x8 over 2m5s)  kubelet          Node default-k8s-diff-port-602386 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m5s (x8 over 2m5s)  kubelet          Node default-k8s-diff-port-602386 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m5s (x8 over 2m5s)  kubelet          Node default-k8s-diff-port-602386 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    2m                   kubelet          Node default-k8s-diff-port-602386 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  2m                   kubelet          Node default-k8s-diff-port-602386 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     2m                   kubelet          Node default-k8s-diff-port-602386 status is now: NodeHasSufficientPID
	  Normal  Starting                 2m                   kubelet          Starting kubelet.
	  Normal  RegisteredNode           115s                 node-controller  Node default-k8s-diff-port-602386 event: Registered Node default-k8s-diff-port-602386 in Controller
	  Normal  NodeReady                103s                 kubelet          Node default-k8s-diff-port-602386 status is now: NodeReady
	  Normal  Starting                 65s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  65s (x8 over 65s)    kubelet          Node default-k8s-diff-port-602386 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    65s (x8 over 65s)    kubelet          Node default-k8s-diff-port-602386 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     65s (x8 over 65s)    kubelet          Node default-k8s-diff-port-602386 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           59s                  node-controller  Node default-k8s-diff-port-602386 event: Registered Node default-k8s-diff-port-602386 in Controller
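	
	Note: the Events table shows three kubelet "Starting" bursts (2m5s, 2m and 65s ago) with matching NodeHasSufficient* sets, i.e. the kubelet restarted twice after the initial boot, consistent with this group's stop/start-then-pause flow. The section is standard describe output:
	
		kubectl describe node default-k8s-diff-port-602386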
	
	
	==> dmesg <==
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff e2 f6 b3 df 65 66 08 06
	[ +15.220231] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff ce d6 cd 1c d5 af 08 06
	[  +0.016823] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff 42 21 9e 70 54 d2 08 06
	[  +0.853950] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 8a f3 da 67 50 34 08 06
	[  +0.000402] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff e2 f6 b3 df 65 66 08 06
	[Nov23 09:06] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 3a fe f0 bb b2 e5 08 06
	[  +0.000433] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000017] ll header: 00000000: ff ff ff ff ff ff 42 21 9e 70 54 d2 08 06
	[ +22.099976] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff fa bc 42 68 ca 38 08 06
	[  +0.042361] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 02 6f 93 2c ed 12 08 06
	[ +12.988668] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff c6 40 c7 0d 08 88 08 06
	[  +0.000458] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff c6 f2 c5 3b d5 0a 08 06
	[  +8.074904] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff ba d8 15 23 cb ea 08 06
	[  +0.000480] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff fa bc 42 68 ca 38 08 06
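	
	Note: "martian source" lines are the kernel flagging packets whose source address is unexpected on eth0; with pods being created and torn down on the bridge they are common and harmless here. Whether they are logged at all is governed by a sysctl:
	
		sysctl net.ipv4.conf.all.log_martians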
	
	
	==> etcd [cb6038e0d1fc65f02647a28477fb55a987cc2404a8c90e7eb192a2e5f4e18b98] <==
	{"level":"warn","ts":"2025-11-23T09:08:27.296175Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36808","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:08:27.304693Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36828","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:08:27.314810Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36842","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:08:27.323188Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36854","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:08:27.330206Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36876","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:08:27.339239Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36892","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:08:27.346910Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36934","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:08:27.354817Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36936","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:08:27.361342Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36954","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:08:27.369358Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36970","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:08:27.376691Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36994","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:08:27.387109Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37012","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:08:27.396559Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37020","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:08:27.408860Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37038","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:08:27.417889Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37060","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:08:27.430281Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37088","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:08:27.440223Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37118","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:08:27.447914Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37122","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:08:27.515878Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37152","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-23T09:08:43.151401Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"128.434518ms","expected-duration":"100ms","prefix":"","request":"header:<ID:6571766361847010322 > lease_revoke:<id:5b339aaff7d5b792>","response":"size:29"}
	{"level":"info","ts":"2025-11-23T09:08:43.151533Z","caller":"traceutil/trace.go:172","msg":"trace[1551499691] linearizableReadLoop","detail":"{readStateIndex:667; appliedIndex:666; }","duration":"126.058397ms","start":"2025-11-23T09:08:43.025455Z","end":"2025-11-23T09:08:43.151514Z","steps":["trace[1551499691] 'read index received'  (duration: 39.361µs)","trace[1551499691] 'applied index is now lower than readState.Index'  (duration: 126.017983ms)"],"step_count":2}
	{"level":"warn","ts":"2025-11-23T09:08:43.151713Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"126.244277ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/default-k8s-diff-port-602386\" limit:1 ","response":"range_response_count:1 size:5735"}
	{"level":"info","ts":"2025-11-23T09:08:43.151736Z","caller":"traceutil/trace.go:172","msg":"trace[1124825809] range","detail":"{range_begin:/registry/minions/default-k8s-diff-port-602386; range_end:; response_count:1; response_revision:636; }","duration":"126.279046ms","start":"2025-11-23T09:08:43.025450Z","end":"2025-11-23T09:08:43.151729Z","steps":["trace[1124825809] 'agreement among raft nodes before linearized reading'  (duration: 126.135484ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-23T09:08:44.140214Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"181.836264ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/coredns-66bc5c9577-64rdm\" limit:1 ","response":"range_response_count:1 size:5944"}
	{"level":"info","ts":"2025-11-23T09:08:44.140265Z","caller":"traceutil/trace.go:172","msg":"trace[180778889] range","detail":"{range_begin:/registry/pods/kube-system/coredns-66bc5c9577-64rdm; range_end:; response_count:1; response_revision:638; }","duration":"181.901973ms","start":"2025-11-23T09:08:43.958352Z","end":"2025-11-23T09:08:44.140254Z","steps":["trace[180778889] 'range keys from in-memory index tree'  (duration: 181.715507ms)"],"step_count":1}
	
	
	==> kernel <==
	 09:09:30 up  1:51,  0 user,  load average: 4.53, 4.44, 2.92
	Linux default-k8s-diff-port-602386 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [afed45fbbc92d2029b02e897ae37cc210e36f8800590cdeafc00d760c4e9fd26] <==
	I1123 09:08:29.478504       1 main.go:139] hostIP = 192.168.94.2
	podIP = 192.168.94.2
	I1123 09:08:29.478788       1 main.go:148] setting mtu 1500 for CNI 
	I1123 09:08:29.478839       1 main.go:178] kindnetd IP family: "ipv4"
	I1123 09:08:29.478884       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-23T09:08:29Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1123 09:08:29.775730       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1123 09:08:29.775801       1 controller.go:381] "Waiting for informer caches to sync"
	I1123 09:08:29.775813       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1123 09:08:29.776775       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1123 09:08:30.215877       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1123 09:08:30.215911       1 metrics.go:72] Registering metrics
	I1123 09:08:30.216018       1 controller.go:711] "Syncing nftables rules"
	I1123 09:08:39.717376       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1123 09:08:39.717481       1 main.go:301] handling current node
	I1123 09:08:49.717158       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1123 09:08:49.717197       1 main.go:301] handling current node
	I1123 09:08:59.717359       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1123 09:08:59.717399       1 main.go:301] handling current node
	I1123 09:09:09.724763       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1123 09:09:09.724800       1 main.go:301] handling current node
	I1123 09:09:19.717155       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1123 09:09:19.717203       1 main.go:301] handling current node
	I1123 09:09:29.716559       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1123 09:09:29.716613       1 main.go:301] handling current node
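	
	Note: the "nri plugin exited" line is non-fatal; kindnet keeps programming routes and nftables without NRI, as the steady "handling current node" heartbeats every 10s show. The label selector below is an assumption based on the usual kindnet manifest:
	
		kubectl -n kube-system logs -l app=kindnet --tail=20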
	
	
	==> kube-apiserver [59138b2d822688d55c6f5894e7864beb2d6fa20594a1b422e8d201e2f8e1c1e2] <==
	I1123 09:08:28.182238       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1123 09:08:28.182497       1 aggregator.go:171] initial CRD sync complete...
	I1123 09:08:28.182512       1 autoregister_controller.go:144] Starting autoregister controller
	I1123 09:08:28.182521       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1123 09:08:28.182531       1 cache.go:39] Caches are synced for autoregister controller
	I1123 09:08:28.181660       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1123 09:08:28.181848       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1123 09:08:28.190029       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	E1123 09:08:28.194568       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1123 09:08:28.244016       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1123 09:08:28.257245       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1123 09:08:28.257585       1 policy_source.go:240] refreshing policies
	I1123 09:08:28.269773       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1123 09:08:28.595524       1 controller.go:667] quota admission added evaluator for: namespaces
	I1123 09:08:28.632772       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1123 09:08:28.658027       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1123 09:08:28.669243       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1123 09:08:28.682251       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1123 09:08:28.729494       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.98.22.138"}
	I1123 09:08:28.747809       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.104.247.172"}
	I1123 09:08:29.084270       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1123 09:08:31.571001       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1123 09:08:31.971544       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1123 09:08:32.070362       1 controller.go:667] quota admission added evaluator for: endpoints
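	
	Note: the "Error removing old endpoints from kubernetes service" line is a known benign race on apiserver restart (there is nothing in storage to erase yet); the clusterIP allocations for the two dashboard Services right after it show the control plane serving writes again. The object it refers to can be inspected with:
	
		kubectl get endpoints kubernetes -o yaml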
	
	
	==> kube-controller-manager [88d09657521f5eeced3d58b537526c35a1a86d0c7389280ba5c54672110cbd64] <==
	I1123 09:08:31.573949       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1123 09:08:31.577017       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1123 09:08:31.578259       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1123 09:08:31.584494       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1123 09:08:31.585658       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1123 09:08:31.600270       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1123 09:08:31.602457       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1123 09:08:31.605096       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1123 09:08:31.607379       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1123 09:08:31.610837       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1123 09:08:31.612361       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1123 09:08:31.612440       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1123 09:08:31.612536       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="default-k8s-diff-port-602386"
	I1123 09:08:31.612608       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1123 09:08:31.614805       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1123 09:08:31.615812       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1123 09:08:31.615847       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1123 09:08:31.615864       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1123 09:08:31.615874       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1123 09:08:31.616698       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1123 09:08:31.616768       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1123 09:08:31.618676       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1123 09:08:31.619607       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1123 09:08:31.619887       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1123 09:08:31.626356       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [aa26fc448ed8012666658fc3bdc730115691445a45555fec8b7f533709c28996] <==
	I1123 09:08:29.271073       1 server_linux.go:53] "Using iptables proxy"
	I1123 09:08:29.339330       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1123 09:08:29.439527       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1123 09:08:29.439572       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.94.2"]
	E1123 09:08:29.439718       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1123 09:08:29.464634       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1123 09:08:29.464695       1 server_linux.go:132] "Using iptables Proxier"
	I1123 09:08:29.472192       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1123 09:08:29.472661       1 server.go:527] "Version info" version="v1.34.1"
	I1123 09:08:29.472747       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1123 09:08:29.474285       1 config.go:200] "Starting service config controller"
	I1123 09:08:29.474349       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1123 09:08:29.475191       1 config.go:106] "Starting endpoint slice config controller"
	I1123 09:08:29.475334       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1123 09:08:29.475242       1 config.go:403] "Starting serviceCIDR config controller"
	I1123 09:08:29.475398       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1123 09:08:29.475527       1 config.go:309] "Starting node config controller"
	I1123 09:08:29.475591       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1123 09:08:29.475616       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1123 09:08:29.574460       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1123 09:08:29.575660       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1123 09:08:29.575670       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [1adb64fac9cd8ca83cde2ea33c1a1d01fd97bd090a659c910fd2247606de3613] <==
	I1123 09:08:27.464940       1 serving.go:386] Generated self-signed cert in-memory
	I1123 09:08:28.818242       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1123 09:08:28.818280       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1123 09:08:28.827667       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1123 09:08:28.827701       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1123 09:08:28.827799       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1123 09:08:28.827860       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1123 09:08:28.828824       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1123 09:08:28.828888       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1123 09:08:28.827619       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1123 09:08:28.829785       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1123 09:08:28.928304       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1123 09:08:28.931043       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1123 09:08:28.931201       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	
	
	==> kubelet <==
	Nov 23 09:08:32 default-k8s-diff-port-602386 kubelet[732]: I1123 09:08:32.241545     732 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/9c3c3e13-2b77-4be8-8c21-1334abedf770-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-4j425\" (UID: \"9c3c3e13-2b77-4be8-8c21-1334abedf770\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-4j425"
	Nov 23 09:08:32 default-k8s-diff-port-602386 kubelet[732]: I1123 09:08:32.241596     732 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-892lr\" (UniqueName: \"kubernetes.io/projected/9c3c3e13-2b77-4be8-8c21-1334abedf770-kube-api-access-892lr\") pod \"dashboard-metrics-scraper-6ffb444bf9-4j425\" (UID: \"9c3c3e13-2b77-4be8-8c21-1334abedf770\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-4j425"
	Nov 23 09:08:32 default-k8s-diff-port-602386 kubelet[732]: I1123 09:08:32.241618     732 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/a2c64126-6d33-4b13-b583-f9b044a3f500-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-kvdxq\" (UID: \"a2c64126-6d33-4b13-b583-f9b044a3f500\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-kvdxq"
	Nov 23 09:08:32 default-k8s-diff-port-602386 kubelet[732]: I1123 09:08:32.241748     732 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6sjxt\" (UniqueName: \"kubernetes.io/projected/a2c64126-6d33-4b13-b583-f9b044a3f500-kube-api-access-6sjxt\") pod \"kubernetes-dashboard-855c9754f9-kvdxq\" (UID: \"a2c64126-6d33-4b13-b583-f9b044a3f500\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-kvdxq"
	Nov 23 09:08:35 default-k8s-diff-port-602386 kubelet[732]: I1123 09:08:35.880322     732 scope.go:117] "RemoveContainer" containerID="3221c64469ef986ecafaabd929a785404d57b5459e34172fab3d56373cef44b3"
	Nov 23 09:08:36 default-k8s-diff-port-602386 kubelet[732]: I1123 09:08:36.885413     732 scope.go:117] "RemoveContainer" containerID="3221c64469ef986ecafaabd929a785404d57b5459e34172fab3d56373cef44b3"
	Nov 23 09:08:36 default-k8s-diff-port-602386 kubelet[732]: I1123 09:08:36.885579     732 scope.go:117] "RemoveContainer" containerID="53c19b710b42583be2a6cf92885f9fcdf990c53b41b950a2b0cd3f7ef6687566"
	Nov 23 09:08:36 default-k8s-diff-port-602386 kubelet[732]: E1123 09:08:36.885791     732 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-4j425_kubernetes-dashboard(9c3c3e13-2b77-4be8-8c21-1334abedf770)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-4j425" podUID="9c3c3e13-2b77-4be8-8c21-1334abedf770"
	Nov 23 09:08:37 default-k8s-diff-port-602386 kubelet[732]: I1123 09:08:37.890371     732 scope.go:117] "RemoveContainer" containerID="53c19b710b42583be2a6cf92885f9fcdf990c53b41b950a2b0cd3f7ef6687566"
	Nov 23 09:08:37 default-k8s-diff-port-602386 kubelet[732]: E1123 09:08:37.890534     732 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-4j425_kubernetes-dashboard(9c3c3e13-2b77-4be8-8c21-1334abedf770)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-4j425" podUID="9c3c3e13-2b77-4be8-8c21-1334abedf770"
	Nov 23 09:08:39 default-k8s-diff-port-602386 kubelet[732]: I1123 09:08:39.912683     732 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-kvdxq" podStartSLOduration=0.860257868 podStartE2EDuration="7.912663419s" podCreationTimestamp="2025-11-23 09:08:32 +0000 UTC" firstStartedPulling="2025-11-23 09:08:32.487875226 +0000 UTC m=+6.770012927" lastFinishedPulling="2025-11-23 09:08:39.540280769 +0000 UTC m=+13.822418478" observedRunningTime="2025-11-23 09:08:39.911886792 +0000 UTC m=+14.194024501" watchObservedRunningTime="2025-11-23 09:08:39.912663419 +0000 UTC m=+14.194801128"
	Nov 23 09:08:40 default-k8s-diff-port-602386 kubelet[732]: I1123 09:08:40.894445     732 scope.go:117] "RemoveContainer" containerID="53c19b710b42583be2a6cf92885f9fcdf990c53b41b950a2b0cd3f7ef6687566"
	Nov 23 09:08:40 default-k8s-diff-port-602386 kubelet[732]: E1123 09:08:40.894685     732 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-4j425_kubernetes-dashboard(9c3c3e13-2b77-4be8-8c21-1334abedf770)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-4j425" podUID="9c3c3e13-2b77-4be8-8c21-1334abedf770"
	Nov 23 09:08:53 default-k8s-diff-port-602386 kubelet[732]: I1123 09:08:53.808979     732 scope.go:117] "RemoveContainer" containerID="53c19b710b42583be2a6cf92885f9fcdf990c53b41b950a2b0cd3f7ef6687566"
	Nov 23 09:08:53 default-k8s-diff-port-602386 kubelet[732]: I1123 09:08:53.938860     732 scope.go:117] "RemoveContainer" containerID="53c19b710b42583be2a6cf92885f9fcdf990c53b41b950a2b0cd3f7ef6687566"
	Nov 23 09:08:53 default-k8s-diff-port-602386 kubelet[732]: I1123 09:08:53.940177     732 scope.go:117] "RemoveContainer" containerID="11275f4b0df65c4816abcdde0d17361833f91eff663b0579a1fa05e5bb378cdd"
	Nov 23 09:08:53 default-k8s-diff-port-602386 kubelet[732]: E1123 09:08:53.941877     732 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-4j425_kubernetes-dashboard(9c3c3e13-2b77-4be8-8c21-1334abedf770)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-4j425" podUID="9c3c3e13-2b77-4be8-8c21-1334abedf770"
	Nov 23 09:09:00 default-k8s-diff-port-602386 kubelet[732]: I1123 09:09:00.895036     732 scope.go:117] "RemoveContainer" containerID="11275f4b0df65c4816abcdde0d17361833f91eff663b0579a1fa05e5bb378cdd"
	Nov 23 09:09:00 default-k8s-diff-port-602386 kubelet[732]: E1123 09:09:00.895316     732 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-4j425_kubernetes-dashboard(9c3c3e13-2b77-4be8-8c21-1334abedf770)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-4j425" podUID="9c3c3e13-2b77-4be8-8c21-1334abedf770"
	Nov 23 09:09:13 default-k8s-diff-port-602386 kubelet[732]: I1123 09:09:13.809549     732 scope.go:117] "RemoveContainer" containerID="11275f4b0df65c4816abcdde0d17361833f91eff663b0579a1fa05e5bb378cdd"
	Nov 23 09:09:13 default-k8s-diff-port-602386 kubelet[732]: E1123 09:09:13.809846     732 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-4j425_kubernetes-dashboard(9c3c3e13-2b77-4be8-8c21-1334abedf770)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-4j425" podUID="9c3c3e13-2b77-4be8-8c21-1334abedf770"
	Nov 23 09:09:25 default-k8s-diff-port-602386 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 23 09:09:25 default-k8s-diff-port-602386 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 23 09:09:25 default-k8s-diff-port-602386 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Nov 23 09:09:25 default-k8s-diff-port-602386 systemd[1]: kubelet.service: Consumed 1.817s CPU time.
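	
	Note: the back-off doubling from 10s to 20s is the kubelet's standard CrashLoopBackOff escalation for dashboard-metrics-scraper, and the closing systemd lines are the harness stopping the kubelet for the Pause step rather than a crash. While the pod object exists, the failed attempt's output is retrievable with:
	
		kubectl -n kubernetes-dashboard logs dashboard-metrics-scraper-6ffb444bf9-4j425 --previous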
	
	
	==> kubernetes-dashboard [91e83b67da04c4cfe73ddf9e56593b3d11b06e0e02c509a14bc1cbdb84283162] <==
	2025/11/23 09:08:39 Using namespace: kubernetes-dashboard
	2025/11/23 09:08:39 Using in-cluster config to connect to apiserver
	2025/11/23 09:08:39 Using secret token for csrf signing
	2025/11/23 09:08:39 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/23 09:08:39 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/23 09:08:39 Successful initial request to the apiserver, version: v1.34.1
	2025/11/23 09:08:39 Generating JWE encryption key
	2025/11/23 09:08:39 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/23 09:08:39 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/23 09:08:39 Initializing JWE encryption key from synchronized object
	2025/11/23 09:08:39 Creating in-cluster Sidecar client
	2025/11/23 09:08:39 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/23 09:08:39 Serving insecurely on HTTP port: 9090
	2025/11/23 09:09:09 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/23 09:08:39 Starting overwatch
	
	
	==> storage-provisioner [5eb3b51fac344707415ffe7f336121a5d12830688403e98b7b0b94240d69fcb1] <==
	I1123 09:08:29.202800       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1123 09:08:29.207347       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	
	
	==> storage-provisioner [f14ee7020f52eaf7fcb7b295bf1d8c156df5ee3eb3c0b0ceb8d09c76d808ccc2] <==
	W1123 09:09:05.406833       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:09:07.410300       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:09:07.414196       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:09:09.416693       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:09:09.420724       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:09:11.423676       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:09:11.427280       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:09:13.430168       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:09:13.434413       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:09:15.438124       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:09:15.442763       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:09:17.448888       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:09:17.453921       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:09:19.457716       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:09:19.462939       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:09:21.466722       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:09:21.471239       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:09:23.474726       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:09:23.478400       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:09:25.482461       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:09:25.487931       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:09:27.490455       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:09:27.494199       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:09:29.497993       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1123 09:09:29.503389       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-602386 -n default-k8s-diff-port-602386
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-602386 -n default-k8s-diff-port-602386: exit status 2 (332.100492ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-602386 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Pause (6.27s)
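
The failure signature in the post-mortem above combines dashboard-metrics-scraper cycling through CrashLoopBackOff with a first storage-provisioner instance that could not reach the apiserver at 10.96.0.1:443 while the cluster was paused. The same signal can be pulled programmatically with client-go; the sketch below is illustrative only (the kubeconfig path is a placeholder, everything else is the standard client-go API) and prints the waiting reason and restart count for each container in the kubernetes-dashboard namespace.

	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Placeholder path: point this at the kubeconfig of the profile under test.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		pods, err := cs.CoreV1().Pods("kubernetes-dashboard").List(context.TODO(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		for _, p := range pods.Items {
			for _, s := range p.Status.ContainerStatuses {
				if s.State.Waiting != nil {
					// CrashLoopBackOff shows up here as the waiting reason,
					// matching the kubelet back-off lines in the log above.
					fmt.Printf("%s/%s: %s (restarts: %d)\n",
						p.Name, s.Name, s.State.Waiting.Reason, s.RestartCount)
				}
			}
		}
	}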

                                                
                                    

Test pass (264/328)

Order passed test Duration
3 TestDownloadOnly/v1.28.0/json-events 12.36
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.07
9 TestDownloadOnly/v1.28.0/DeleteAll 0.23
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.14
12 TestDownloadOnly/v1.34.1/json-events 11.97
13 TestDownloadOnly/v1.34.1/preload-exists 0
17 TestDownloadOnly/v1.34.1/LogsDuration 0.08
18 TestDownloadOnly/v1.34.1/DeleteAll 0.23
19 TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds 0.15
20 TestDownloadOnlyKic 0.5
21 TestBinaryMirror 0.88
22 TestOffline 55.93
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.08
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.08
27 TestAddons/Setup 106.03
31 TestAddons/serial/GCPAuth/Namespaces 0.14
32 TestAddons/serial/GCPAuth/FakeCredentials 10.43
48 TestAddons/StoppedEnableDisable 16.71
49 TestCertOptions 23.5
50 TestCertExpiration 215.9
52 TestForceSystemdFlag 29.36
53 TestForceSystemdEnv 32.41
58 TestErrorSpam/setup 23.78
59 TestErrorSpam/start 0.7
60 TestErrorSpam/status 0.98
61 TestErrorSpam/pause 5.31
62 TestErrorSpam/unpause 5.72
63 TestErrorSpam/stop 12.64
66 TestFunctional/serial/CopySyncFile 0
67 TestFunctional/serial/StartWithProxy 41.25
68 TestFunctional/serial/AuditLog 0
69 TestFunctional/serial/SoftStart 6.19
70 TestFunctional/serial/KubeContext 0.04
71 TestFunctional/serial/KubectlGetPods 0.07
74 TestFunctional/serial/CacheCmd/cache/add_remote 4.53
75 TestFunctional/serial/CacheCmd/cache/add_local 2.33
76 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
77 TestFunctional/serial/CacheCmd/cache/list 0.06
78 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.3
79 TestFunctional/serial/CacheCmd/cache/cache_reload 2.13
80 TestFunctional/serial/CacheCmd/cache/delete 0.13
81 TestFunctional/serial/MinikubeKubectlCmd 0.12
82 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.12
83 TestFunctional/serial/ExtraConfig 66.3
84 TestFunctional/serial/ComponentHealth 0.07
85 TestFunctional/serial/LogsCmd 1.24
86 TestFunctional/serial/LogsFileCmd 1.25
87 TestFunctional/serial/InvalidService 3.88
89 TestFunctional/parallel/ConfigCmd 0.45
90 TestFunctional/parallel/DashboardCmd 7.73
91 TestFunctional/parallel/DryRun 0.45
92 TestFunctional/parallel/InternationalLanguage 0.18
93 TestFunctional/parallel/StatusCmd 1.07
98 TestFunctional/parallel/AddonsCmd 0.2
99 TestFunctional/parallel/PersistentVolumeClaim 29.77
101 TestFunctional/parallel/SSHCmd 0.64
102 TestFunctional/parallel/CpCmd 2.04
103 TestFunctional/parallel/MySQL 17.67
104 TestFunctional/parallel/FileSync 0.32
105 TestFunctional/parallel/CertSync 2
109 TestFunctional/parallel/NodeLabels 0.08
111 TestFunctional/parallel/NonActiveRuntimeDisabled 0.65
113 TestFunctional/parallel/License 0.85
114 TestFunctional/parallel/Version/short 0.06
115 TestFunctional/parallel/Version/components 0.51
116 TestFunctional/parallel/ImageCommands/ImageListShort 0.23
117 TestFunctional/parallel/ImageCommands/ImageListTable 1.15
118 TestFunctional/parallel/ImageCommands/ImageListJson 0.26
119 TestFunctional/parallel/ImageCommands/ImageListYaml 0.24
120 TestFunctional/parallel/ImageCommands/ImageBuild 3.6
121 TestFunctional/parallel/ImageCommands/Setup 1.77
122 TestFunctional/parallel/UpdateContextCmd/no_changes 0.18
123 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.17
124 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.16
125 TestFunctional/parallel/ProfileCmd/profile_not_create 0.54
127 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.6
128 TestFunctional/parallel/ProfileCmd/profile_list 0.49
129 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
131 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 9.27
132 TestFunctional/parallel/ProfileCmd/profile_json_output 0.5
137 TestFunctional/parallel/ImageCommands/ImageRemove 0.57
140 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.07
141 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
145 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
147 TestFunctional/parallel/MountCmd/any-port 7.76
148 TestFunctional/parallel/MountCmd/specific-port 1.79
149 TestFunctional/parallel/MountCmd/VerifyCleanup 1.55
150 TestFunctional/parallel/ServiceCmd/List 1.73
151 TestFunctional/parallel/ServiceCmd/JSONOutput 1.71
155 TestFunctional/delete_echo-server_images 0.04
156 TestFunctional/delete_my-image_image 0.02
157 TestFunctional/delete_minikube_cached_images 0.02
162 TestMultiControlPlane/serial/StartCluster 114.71
163 TestMultiControlPlane/serial/DeployApp 6.86
164 TestMultiControlPlane/serial/PingHostFromPods 1.02
165 TestMultiControlPlane/serial/AddWorkerNode 26.11
166 TestMultiControlPlane/serial/NodeLabels 0.06
167 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.92
168 TestMultiControlPlane/serial/CopyFile 17.43
169 TestMultiControlPlane/serial/StopSecondaryNode 14.32
170 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.75
171 TestMultiControlPlane/serial/RestartSecondaryNode 14.23
172 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.92
173 TestMultiControlPlane/serial/RestartClusterKeepsNodes 117.78
174 TestMultiControlPlane/serial/DeleteSecondaryNode 10.6
175 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.73
176 TestMultiControlPlane/serial/StopCluster 41.66
177 TestMultiControlPlane/serial/RestartCluster 54.31
178 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.72
179 TestMultiControlPlane/serial/AddSecondaryNode 43.91
180 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.93
185 TestJSONOutput/start/Command 40.46
186 TestJSONOutput/start/Audit 0
188 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
189 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
192 TestJSONOutput/pause/Audit 0
194 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
195 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
198 TestJSONOutput/unpause/Audit 0
200 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
201 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
203 TestJSONOutput/stop/Command 6.08
204 TestJSONOutput/stop/Audit 0
206 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
207 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
208 TestErrorJSONOutput 0.24
210 TestKicCustomNetwork/create_custom_network 34.49
211 TestKicCustomNetwork/use_default_bridge_network 25.29
212 TestKicExistingNetwork 22.67
213 TestKicCustomSubnet 26.59
214 TestKicStaticIP 23.47
215 TestMainNoArgs 0.06
216 TestMinikubeProfile 49.7
219 TestMountStart/serial/StartWithMountFirst 7.87
220 TestMountStart/serial/VerifyMountFirst 0.27
221 TestMountStart/serial/StartWithMountSecond 7.79
222 TestMountStart/serial/VerifyMountSecond 0.28
223 TestMountStart/serial/DeleteFirst 1.69
224 TestMountStart/serial/VerifyMountPostDelete 0.27
225 TestMountStart/serial/Stop 1.25
226 TestMountStart/serial/RestartStopped 8.3
227 TestMountStart/serial/VerifyMountPostStop 0.27
230 TestMultiNode/serial/FreshStart2Nodes 68.25
231 TestMultiNode/serial/DeployApp2Nodes 4.03
232 TestMultiNode/serial/PingHostFrom2Pods 0.73
233 TestMultiNode/serial/AddNode 23.24
234 TestMultiNode/serial/MultiNodeLabels 0.06
235 TestMultiNode/serial/ProfileList 0.69
236 TestMultiNode/serial/CopyFile 10.05
237 TestMultiNode/serial/StopNode 2.32
238 TestMultiNode/serial/StartAfterStop 7.28
239 TestMultiNode/serial/RestartKeepsNodes 82.46
240 TestMultiNode/serial/DeleteNode 5.29
241 TestMultiNode/serial/StopMultiNode 30.46
242 TestMultiNode/serial/RestartMultiNode 44.46
243 TestMultiNode/serial/ValidateNameConflict 26.1
248 TestPreload 114.75
250 TestScheduledStopUnix 98.63
253 TestInsufficientStorage 12.47
254 TestRunningBinaryUpgrade 58.92
256 TestKubernetesUpgrade 309.82
257 TestMissingContainerUpgrade 125.1
258 TestStoppedBinaryUpgrade/Setup 3.34
262 TestStoppedBinaryUpgrade/Upgrade 89.25
267 TestNetworkPlugins/group/false 4.8
278 TestStoppedBinaryUpgrade/MinikubeLogs 0.96
280 TestPause/serial/Start 45.65
282 TestNoKubernetes/serial/StartNoK8sWithVersion 0.11
283 TestNoKubernetes/serial/StartWithK8s 20.26
284 TestNoKubernetes/serial/StartWithStopK8s 16.09
285 TestPause/serial/SecondStartNoReconfiguration 5.84
287 TestNoKubernetes/serial/Start 7.09
288 TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads 0
289 TestNoKubernetes/serial/VerifyK8sNotRunning 0.31
290 TestNoKubernetes/serial/ProfileList 15.56
291 TestNoKubernetes/serial/Stop 1.27
292 TestNoKubernetes/serial/StartNoArgs 7.65
293 TestNetworkPlugins/group/auto/Start 39.36
294 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.29
295 TestNetworkPlugins/group/kindnet/Start 41.57
296 TestNetworkPlugins/group/auto/KubeletFlags 0.29
297 TestNetworkPlugins/group/auto/NetCatPod 8.19
298 TestNetworkPlugins/group/auto/DNS 0.11
299 TestNetworkPlugins/group/auto/Localhost 0.08
300 TestNetworkPlugins/group/auto/HairPin 0.08
301 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
302 TestNetworkPlugins/group/kindnet/KubeletFlags 0.31
303 TestNetworkPlugins/group/kindnet/NetCatPod 9.21
304 TestNetworkPlugins/group/kindnet/DNS 0.1
305 TestNetworkPlugins/group/kindnet/Localhost 0.08
306 TestNetworkPlugins/group/kindnet/HairPin 0.09
307 TestNetworkPlugins/group/calico/Start 51.55
308 TestNetworkPlugins/group/custom-flannel/Start 49
309 TestNetworkPlugins/group/enable-default-cni/Start 36.97
310 TestNetworkPlugins/group/calico/ControllerPod 6.01
311 TestNetworkPlugins/group/calico/KubeletFlags 0.3
312 TestNetworkPlugins/group/calico/NetCatPod 9.19
313 TestNetworkPlugins/group/flannel/Start 49.09
314 TestNetworkPlugins/group/calico/DNS 0.11
315 TestNetworkPlugins/group/calico/Localhost 0.09
316 TestNetworkPlugins/group/calico/HairPin 0.1
317 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.33
318 TestNetworkPlugins/group/custom-flannel/NetCatPod 11.24
319 TestNetworkPlugins/group/custom-flannel/DNS 0.12
320 TestNetworkPlugins/group/custom-flannel/Localhost 0.1
321 TestNetworkPlugins/group/custom-flannel/HairPin 0.09
322 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.38
323 TestNetworkPlugins/group/enable-default-cni/NetCatPod 8.22
324 TestNetworkPlugins/group/bridge/Start 41.72
325 TestNetworkPlugins/group/enable-default-cni/DNS 0.2
326 TestNetworkPlugins/group/enable-default-cni/Localhost 0.1
327 TestNetworkPlugins/group/enable-default-cni/HairPin 0.11
329 TestStartStop/group/old-k8s-version/serial/FirstStart 52.57
330 TestNetworkPlugins/group/flannel/ControllerPod 6.01
332 TestStartStop/group/no-preload/serial/FirstStart 56.38
333 TestNetworkPlugins/group/flannel/KubeletFlags 0.43
334 TestNetworkPlugins/group/flannel/NetCatPod 10.26
335 TestNetworkPlugins/group/bridge/KubeletFlags 0.31
336 TestNetworkPlugins/group/bridge/NetCatPod 8.21
337 TestNetworkPlugins/group/flannel/DNS 0.12
338 TestNetworkPlugins/group/flannel/Localhost 0.1
339 TestNetworkPlugins/group/flannel/HairPin 0.12
340 TestNetworkPlugins/group/bridge/DNS 0.12
341 TestNetworkPlugins/group/bridge/Localhost 0.1
342 TestNetworkPlugins/group/bridge/HairPin 0.1
344 TestStartStop/group/embed-certs/serial/FirstStart 41.23
345 TestStartStop/group/old-k8s-version/serial/DeployApp 11.27
347 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 42.77
349 TestStartStop/group/old-k8s-version/serial/Stop 16.14
350 TestStartStop/group/no-preload/serial/DeployApp 8.26
352 TestStartStop/group/no-preload/serial/Stop 16.35
353 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.23
354 TestStartStop/group/old-k8s-version/serial/SecondStart 43.02
355 TestStartStop/group/embed-certs/serial/DeployApp 10.27
356 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.2
357 TestStartStop/group/no-preload/serial/SecondStart 50.52
358 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 9.26
360 TestStartStop/group/embed-certs/serial/Stop 18.59
362 TestStartStop/group/default-k8s-diff-port/serial/Stop 16.39
363 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.21
364 TestStartStop/group/embed-certs/serial/SecondStart 47.76
365 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
366 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.23
367 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 53.63
368 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 6.1
369 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.31
371 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
373 TestStartStop/group/newest-cni/serial/FirstStart 26.58
374 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.07
375 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.25
377 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
378 TestStartStop/group/newest-cni/serial/DeployApp 0
380 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.07
381 TestStartStop/group/newest-cni/serial/Stop 2.53
382 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.2
383 TestStartStop/group/newest-cni/serial/SecondStart 10.98
384 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.24
386 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
387 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 6.08
388 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
389 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
390 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.26
392 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.27
TestDownloadOnly/v1.28.0/json-events (12.36s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-874990 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-874990 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (12.360520474s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (12.36s)

                                                
                                    
TestDownloadOnly/v1.28.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I1123 08:19:53.301995  107234 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
I1123 08:19:53.302106  107234 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21969-103686/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.0/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-874990
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-874990: exit status 85 (73.020512ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-874990 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-874990 │ jenkins │ v1.37.0 │ 23 Nov 25 08:19 UTC │          │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/23 08:19:40
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1123 08:19:40.994832  107246 out.go:360] Setting OutFile to fd 1 ...
	I1123 08:19:40.995104  107246 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:19:40.995115  107246 out.go:374] Setting ErrFile to fd 2...
	I1123 08:19:40.995119  107246 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:19:40.995301  107246 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21969-103686/.minikube/bin
	W1123 08:19:40.995427  107246 root.go:314] Error reading config file at /home/jenkins/minikube-integration/21969-103686/.minikube/config/config.json: open /home/jenkins/minikube-integration/21969-103686/.minikube/config/config.json: no such file or directory
	I1123 08:19:40.995911  107246 out.go:368] Setting JSON to true
	I1123 08:19:40.996770  107246 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":3721,"bootTime":1763882260,"procs":211,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1123 08:19:40.996828  107246 start.go:143] virtualization: kvm guest
	I1123 08:19:40.999606  107246 out.go:99] [download-only-874990] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	W1123 08:19:40.999728  107246 preload.go:354] Failed to list preload files: open /home/jenkins/minikube-integration/21969-103686/.minikube/cache/preloaded-tarball: no such file or directory
	I1123 08:19:40.999780  107246 notify.go:221] Checking for updates...
	I1123 08:19:41.001062  107246 out.go:171] MINIKUBE_LOCATION=21969
	I1123 08:19:41.002332  107246 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1123 08:19:41.003534  107246 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21969-103686/kubeconfig
	I1123 08:19:41.004590  107246 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21969-103686/.minikube
	I1123 08:19:41.005622  107246 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1123 08:19:41.007546  107246 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1123 08:19:41.007846  107246 driver.go:422] Setting default libvirt URI to qemu:///system
	I1123 08:19:41.032048  107246 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1123 08:19:41.032166  107246 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 08:19:41.093867  107246 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:28 OomKillDisable:false NGoroutines:65 SystemTime:2025-11-23 08:19:41.083812163 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1123 08:19:41.093986  107246 docker.go:319] overlay module found
	I1123 08:19:41.095483  107246 out.go:99] Using the docker driver based on user configuration
	I1123 08:19:41.095508  107246 start.go:309] selected driver: docker
	I1123 08:19:41.095514  107246 start.go:927] validating driver "docker" against <nil>
	I1123 08:19:41.095596  107246 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 08:19:41.152543  107246 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:28 OomKillDisable:false NGoroutines:65 SystemTime:2025-11-23 08:19:41.142215733 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1123 08:19:41.152698  107246 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1123 08:19:41.153237  107246 start_flags.go:410] Using suggested 8000MB memory alloc based on sys=32093MB, container=32093MB
	I1123 08:19:41.153391  107246 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1123 08:19:41.154851  107246 out.go:171] Using Docker driver with root privileges
	I1123 08:19:41.156043  107246 cni.go:84] Creating CNI manager for ""
	I1123 08:19:41.156132  107246 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1123 08:19:41.156146  107246 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1123 08:19:41.156257  107246 start.go:353] cluster config:
	{Name:download-only-874990 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-874990 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 08:19:41.158305  107246 out.go:99] Starting "download-only-874990" primary control-plane node in "download-only-874990" cluster
	I1123 08:19:41.158564  107246 cache.go:134] Beginning downloading kic base image for docker with crio
	I1123 08:19:41.159674  107246 out.go:99] Pulling base image v0.0.48-1763789673-21948 ...
	I1123 08:19:41.159705  107246 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1123 08:19:41.159789  107246 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1123 08:19:41.177796  107246 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f to local cache
	I1123 08:19:41.177995  107246 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local cache directory
	I1123 08:19:41.178091  107246 image.go:150] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f to local cache
	I1123 08:19:41.734909  107246 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	I1123 08:19:41.734946  107246 cache.go:65] Caching tarball of preloaded images
	I1123 08:19:41.735142  107246 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1123 08:19:41.736902  107246 out.go:99] Downloading Kubernetes v1.28.0 preload ...
	I1123 08:19:41.736921  107246 preload.go:318] getting checksum for preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4 from gcs api...
	I1123 08:19:41.835501  107246 preload.go:295] Got checksum from GCS API "72bc7f8573f574c02d8c9a9b3496176b"
	I1123 08:19:41.835660  107246 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:72bc7f8573f574c02d8c9a9b3496176b -> /home/jenkins/minikube-integration/21969-103686/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	I1123 08:19:45.875189  107246 cache.go:166] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f as a tarball
	
	
	* The control-plane node download-only-874990 host does not exist
	  To start a cluster, run: "minikube start -p download-only-874990"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.07s)
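
The Last Start log above records the preload flow: the MD5 is fetched from the GCS API ("Got checksum from GCS API ...") and the tarball URL is tagged with ?checksum=md5:... so the payload is verified as it lands. Below is a minimal sketch of that download-and-verify step, assuming an illustrative /tmp destination; the harness's real logic lives in the download.go shown in the log, not here. The URL and digest are the ones printed above.

	package main

	import (
		"crypto/md5"
		"encoding/hex"
		"fmt"
		"io"
		"net/http"
		"os"
	)

	// downloadWithMD5 streams url to dst while hashing, then rejects the
	// file if the digest does not match want (hex-encoded MD5).
	func downloadWithMD5(url, dst, want string) error {
		resp, err := http.Get(url)
		if err != nil {
			return err
		}
		defer resp.Body.Close()
		out, err := os.Create(dst)
		if err != nil {
			return err
		}
		defer out.Close()
		h := md5.New()
		// Tee the body through the hash while writing to disk.
		if _, err := io.Copy(io.MultiWriter(out, h), resp.Body); err != nil {
			return err
		}
		if got := hex.EncodeToString(h.Sum(nil)); got != want {
			return fmt.Errorf("checksum mismatch: got %s, want %s", got, want)
		}
		return nil
	}

	func main() {
		// URL and checksum taken verbatim from the log above.
		url := "https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4"
		if err := downloadWithMD5(url, "/tmp/preload.tar.lz4", "72bc7f8573f574c02d8c9a9b3496176b"); err != nil {
			fmt.Println(err)
			os.Exit(1)
		}
	}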

                                                
                                    
TestDownloadOnly/v1.28.0/DeleteAll (0.23s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.23s)

                                                
                                    
TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-874990
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                    
TestDownloadOnly/v1.34.1/json-events (11.97s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-173580 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-173580 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio: (11.964862373s)
--- PASS: TestDownloadOnly/v1.34.1/json-events (11.97s)

                                                
                                    
TestDownloadOnly/v1.34.1/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/preload-exists
I1123 08:20:05.711554  107234 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
I1123 08:20:05.711600  107234 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21969-103686/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.1/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.1/LogsDuration (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-173580
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-173580: exit status 85 (74.311936ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-874990 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-874990 │ jenkins │ v1.37.0 │ 23 Nov 25 08:19 UTC │                     │
	│ delete  │ --all                                                                                                                                                                     │ minikube             │ jenkins │ v1.37.0 │ 23 Nov 25 08:19 UTC │ 23 Nov 25 08:19 UTC │
	│ delete  │ -p download-only-874990                                                                                                                                                   │ download-only-874990 │ jenkins │ v1.37.0 │ 23 Nov 25 08:19 UTC │ 23 Nov 25 08:19 UTC │
	│ start   │ -o=json --download-only -p download-only-173580 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-173580 │ jenkins │ v1.37.0 │ 23 Nov 25 08:19 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/23 08:19:53
	Running on machine: ubuntu-20-agent-9
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1123 08:19:53.797721  107629 out.go:360] Setting OutFile to fd 1 ...
	I1123 08:19:53.797832  107629 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:19:53.797841  107629 out.go:374] Setting ErrFile to fd 2...
	I1123 08:19:53.797845  107629 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:19:53.798042  107629 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21969-103686/.minikube/bin
	I1123 08:19:53.798476  107629 out.go:368] Setting JSON to true
	I1123 08:19:53.799304  107629 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":3734,"bootTime":1763882260,"procs":181,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1123 08:19:53.799357  107629 start.go:143] virtualization: kvm guest
	I1123 08:19:53.801014  107629 out.go:99] [download-only-173580] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1123 08:19:53.801141  107629 notify.go:221] Checking for updates...
	I1123 08:19:53.802376  107629 out.go:171] MINIKUBE_LOCATION=21969
	I1123 08:19:53.803660  107629 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1123 08:19:53.804933  107629 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21969-103686/kubeconfig
	I1123 08:19:53.806027  107629 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21969-103686/.minikube
	I1123 08:19:53.810430  107629 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1123 08:19:53.812395  107629 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1123 08:19:53.812652  107629 driver.go:422] Setting default libvirt URI to qemu:///system
	I1123 08:19:53.837213  107629 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1123 08:19:53.837314  107629 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 08:19:53.898533  107629 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:28 OomKillDisable:false NGoroutines:51 SystemTime:2025-11-23 08:19:53.889257263 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1123 08:19:53.898625  107629 docker.go:319] overlay module found
	I1123 08:19:53.900123  107629 out.go:99] Using the docker driver based on user configuration
	I1123 08:19:53.900145  107629 start.go:309] selected driver: docker
	I1123 08:19:53.900151  107629 start.go:927] validating driver "docker" against <nil>
	I1123 08:19:53.900238  107629 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 08:19:53.958256  107629 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:28 OomKillDisable:false NGoroutines:51 SystemTime:2025-11-23 08:19:53.949236062 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1123 08:19:53.958419  107629 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1123 08:19:53.958854  107629 start_flags.go:410] Using suggested 8000MB memory alloc based on sys=32093MB, container=32093MB
	I1123 08:19:53.959011  107629 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1123 08:19:53.961057  107629 out.go:171] Using Docker driver with root privileges
	I1123 08:19:53.963244  107629 cni.go:84] Creating CNI manager for ""
	I1123 08:19:53.963312  107629 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1123 08:19:53.963324  107629 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1123 08:19:53.963405  107629 start.go:353] cluster config:
	{Name:download-only-173580 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:download-only-173580 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 08:19:53.964628  107629 out.go:99] Starting "download-only-173580" primary control-plane node in "download-only-173580" cluster
	I1123 08:19:53.964646  107629 cache.go:134] Beginning downloading kic base image for docker with crio
	I1123 08:19:53.965747  107629 out.go:99] Pulling base image v0.0.48-1763789673-21948 ...
	I1123 08:19:53.965783  107629 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1123 08:19:53.965808  107629 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1123 08:19:53.982927  107629 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f to local cache
	I1123 08:19:53.983062  107629 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local cache directory
	I1123 08:19:53.983082  107629 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local cache directory, skipping pull
	I1123 08:19:53.983087  107629 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in cache, skipping pull
	I1123 08:19:53.983097  107629 cache.go:166] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f as a tarball
	I1123 08:19:54.829675  107629 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.1/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1123 08:19:54.829719  107629 cache.go:65] Caching tarball of preloaded images
	I1123 08:19:54.829995  107629 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1123 08:19:54.831683  107629 out.go:99] Downloading Kubernetes v1.34.1 preload ...
	I1123 08:19:54.831703  107629 preload.go:318] getting checksum for preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 from gcs api...
	I1123 08:19:54.931268  107629 preload.go:295] Got checksum from GCS API "d1a46823b9241c5d38b5e0866197f2a8"
	I1123 08:19:54.931336  107629 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.1/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4?checksum=md5:d1a46823b9241c5d38b5e0866197f2a8 -> /home/jenkins/minikube-integration/21969-103686/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-173580 host does not exist
	  To start a cluster, run: "minikube start -p download-only-173580"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.1/LogsDuration (0.08s)
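
Unlike the v1.28.0 run, this run finds the kicbase image already in the local cache directory and skips the pull ("exists in cache, skipping pull"). The gate is essentially an existence check on the cached file; a sketch of that shape, with a hypothetical path and fetch function (not the harness's actual code):

	package main

	import (
		"fmt"
		"os"
	)

	// cachedOrFetch runs fetch only when path is absent, mirroring the
	// "exists in cache, skipping pull" decision in the log above.
	func cachedOrFetch(path string, fetch func() error) error {
		if _, err := os.Stat(path); err == nil {
			return nil // cache hit: nothing to do
		} else if !os.IsNotExist(err) {
			return err // unexpected stat failure, not a simple miss
		}
		return fetch()
	}

	func main() {
		// Path and fetch body are placeholders for illustration.
		err := cachedOrFetch("/tmp/kicbase.tar", func() error {
			fmt.Println("cache miss: downloading")
			return nil
		})
		if err != nil {
			fmt.Println(err)
		}
	}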

                                                
                                    
TestDownloadOnly/v1.34.1/DeleteAll (0.23s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.34.1/DeleteAll (0.23s)

                                                
                                    
TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.15s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-173580
--- PASS: TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.15s)

                                                
                                    
TestDownloadOnlyKic (0.5s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:231: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p download-docker-494850 --alsologtostderr --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "download-docker-494850" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p download-docker-494850
--- PASS: TestDownloadOnlyKic (0.50s)

TestBinaryMirror (0.88s)
=== RUN   TestBinaryMirror
I1123 08:20:06.958142  107234 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:309: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-620131 --alsologtostderr --binary-mirror http://127.0.0.1:33645 --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-620131" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-620131
--- PASS: TestBinaryMirror (0.88s)

TestOffline (55.93s)
=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-228886 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-228886 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=crio: (50.025554648s)
helpers_test.go:175: Cleaning up "offline-crio-228886" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-228886
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-228886: (5.899195478s)
--- PASS: TestOffline (55.93s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.08s)
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1000: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-450053
addons_test.go:1000: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-450053: exit status 85 (75.425755ms)

-- stdout --
	* Profile "addons-450053" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-450053"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.08s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.08s)
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1011: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-450053
addons_test.go:1011: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-450053: exit status 85 (75.452893ms)

-- stdout --
	* Profile "addons-450053" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-450053"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.08s)

TestAddons/Setup (106.03s)
=== RUN   TestAddons/Setup
addons_test.go:108: (dbg) Run:  out/minikube-linux-amd64 start -p addons-450053 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:108: (dbg) Done: out/minikube-linux-amd64 start -p addons-450053 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (1m46.026516175s)
--- PASS: TestAddons/Setup (106.03s)

TestAddons/serial/GCPAuth/Namespaces (0.14s)
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:630: (dbg) Run:  kubectl --context addons-450053 create ns new-namespace
addons_test.go:644: (dbg) Run:  kubectl --context addons-450053 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.14s)

TestAddons/serial/GCPAuth/FakeCredentials (10.43s)
=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:675: (dbg) Run:  kubectl --context addons-450053 create -f testdata/busybox.yaml
addons_test.go:682: (dbg) Run:  kubectl --context addons-450053 create sa gcp-auth-test
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [822ae99b-9daa-4f74-b15f-1da49fbcf1fe] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [822ae99b-9daa-4f74-b15f-1da49fbcf1fe] Running
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 10.003581699s
addons_test.go:694: (dbg) Run:  kubectl --context addons-450053 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:706: (dbg) Run:  kubectl --context addons-450053 describe sa gcp-auth-test
addons_test.go:744: (dbg) Run:  kubectl --context addons-450053 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (10.43s)

TestAddons/StoppedEnableDisable (16.71s)
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-450053
addons_test.go:172: (dbg) Done: out/minikube-linux-amd64 stop -p addons-450053: (16.415607095s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-450053
addons_test.go:180: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-450053
addons_test.go:185: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-450053
--- PASS: TestAddons/StoppedEnableDisable (16.71s)

TestCertOptions (23.5s)
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-555513 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-555513 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio: (20.333423948s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-555513 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-555513 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-555513 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-555513" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-555513
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-555513: (2.454878176s)
--- PASS: TestCertOptions (23.50s)
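
Editor's note: TestCertOptions passes extra --apiserver-ips/--apiserver-names plus a custom --apiserver-port, then reads back the served certificate with the openssl command shown above. A hedged way to eyeball the SANs by hand while the profile still exists (the grep pattern is illustrative):

    # dump the apiserver certificate and inspect its Subject Alternative Names
    minikube -p cert-options-555513 ssh -- "sudo openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" | grep -A1 "Subject Alternative Name"
    # 192.168.15.15 and www.google.com should appear among the IP/DNS entries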

TestCertExpiration (215.9s)
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-723349 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-723349 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio: (27.614311357s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-723349 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-723349 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio: (5.82499212s)
helpers_test.go:175: Cleaning up "cert-expiration-723349" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-723349
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-723349: (2.459659044s)
--- PASS: TestCertExpiration (215.90s)
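
Editor's note: this test first starts the cluster with --cert-expiration=3m, waits out that window (hence the ~3.5-minute wall time), then restarts with --cert-expiration=8760h to confirm the certificates are regenerated. A minimal sketch for checking the resulting expiry by hand, assuming the profile is still up:

    # print the apiserver certificate's notAfter date inside the node
    minikube -p cert-expiration-723349 ssh -- "sudo openssl x509 -noout -enddate -in /var/lib/minikube/certs/apiserver.crt"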

TestForceSystemdFlag (29.36s)
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-786725 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-786725 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (26.182864894s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-786725 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-786725" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-786725
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-786725: (2.861633232s)
--- PASS: TestForceSystemdFlag (29.36s)
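
Editor's note: the final check above cats /etc/crio/crio.conf.d/02-crio.conf; with --force-systemd, the expectation (an assumption here, inferred from the flag's purpose) is that CRI-O was configured for the systemd cgroup manager:

    # look for the cgroup manager setting that --force-systemd should have written
    minikube -p force-systemd-flag-786725 ssh -- "grep cgroup_manager /etc/crio/crio.conf.d/02-crio.conf"
    # anticipated output: cgroup_manager = "systemd"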

TestForceSystemdEnv (32.41s)
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-696878 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-696878 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (29.733966846s)
helpers_test.go:175: Cleaning up "force-systemd-env-696878" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-696878
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-696878: (2.674536351s)
--- PASS: TestForceSystemdEnv (32.41s)

TestErrorSpam/setup (23.78s)
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-612182 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-612182 --driver=docker  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-612182 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-612182 --driver=docker  --container-runtime=crio: (23.784759346s)
--- PASS: TestErrorSpam/setup (23.78s)

TestErrorSpam/start (0.7s)
=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-612182 --log_dir /tmp/nospam-612182 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-612182 --log_dir /tmp/nospam-612182 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-612182 --log_dir /tmp/nospam-612182 start --dry-run
--- PASS: TestErrorSpam/start (0.70s)

TestErrorSpam/status (0.98s)
=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-612182 --log_dir /tmp/nospam-612182 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-612182 --log_dir /tmp/nospam-612182 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-612182 --log_dir /tmp/nospam-612182 status
--- PASS: TestErrorSpam/status (0.98s)

TestErrorSpam/pause (5.31s)
=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-612182 --log_dir /tmp/nospam-612182 pause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-612182 --log_dir /tmp/nospam-612182 pause: exit status 80 (1.632003356s)

-- stdout --
	* Pausing node nospam-612182 ... 
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T08:25:32Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:151: "out/minikube-linux-amd64 -p nospam-612182 --log_dir /tmp/nospam-612182 pause" failed: exit status 80
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-612182 --log_dir /tmp/nospam-612182 pause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-612182 --log_dir /tmp/nospam-612182 pause: exit status 80 (1.854684492s)

-- stdout --
	* Pausing node nospam-612182 ... 
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T08:25:34Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:151: "out/minikube-linux-amd64 -p nospam-612182 --log_dir /tmp/nospam-612182 pause" failed: exit status 80
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-612182 --log_dir /tmp/nospam-612182 pause
error_spam_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-612182 --log_dir /tmp/nospam-612182 pause: exit status 80 (1.827373573s)

-- stdout --
	* Pausing node nospam-612182 ... 
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T08:25:36Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:174: "out/minikube-linux-amd64 -p nospam-612182 --log_dir /tmp/nospam-612182 pause" failed: exit status 80
--- PASS: TestErrorSpam/pause (5.31s)
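
Editor's note: all three pause attempts above fail identically — per the stderr, the GUEST_PAUSE path shells into the node and runs `sudo runc list -f json` to enumerate running containers, and that probe dies because /run/runc does not exist. The failing probe can be replayed from the host (the first command is taken verbatim from the stderr; the ls check is an added diagnostic):

    # re-run the exact probe that `minikube pause` executes inside the node
    minikube -p nospam-612182 ssh -- "sudo runc list -f json"
    # check whether runc's default state directory exists at all
    minikube -p nospam-612182 ssh -- "ls -ld /run/runc"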

TestErrorSpam/unpause (5.72s)
=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-612182 --log_dir /tmp/nospam-612182 unpause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-612182 --log_dir /tmp/nospam-612182 unpause: exit status 80 (1.636712433s)

-- stdout --
	* Unpausing node nospam-612182 ... 
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T08:25:37Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:151: "out/minikube-linux-amd64 -p nospam-612182 --log_dir /tmp/nospam-612182 unpause" failed: exit status 80
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-612182 --log_dir /tmp/nospam-612182 unpause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-612182 --log_dir /tmp/nospam-612182 unpause: exit status 80 (2.128445702s)

-- stdout --
	* Unpausing node nospam-612182 ... 
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T08:25:39Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:151: "out/minikube-linux-amd64 -p nospam-612182 --log_dir /tmp/nospam-612182 unpause" failed: exit status 80
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-612182 --log_dir /tmp/nospam-612182 unpause
error_spam_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-612182 --log_dir /tmp/nospam-612182 unpause: exit status 80 (1.952823683s)

-- stdout --
	* Unpausing node nospam-612182 ... 
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-23T08:25:41Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:174: "out/minikube-linux-amd64 -p nospam-612182 --log_dir /tmp/nospam-612182 unpause" failed: exit status 80
--- PASS: TestErrorSpam/unpause (5.72s)

TestErrorSpam/stop (12.64s)
=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-612182 --log_dir /tmp/nospam-612182 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-amd64 -p nospam-612182 --log_dir /tmp/nospam-612182 stop: (12.425731748s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-612182 --log_dir /tmp/nospam-612182 stop
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-612182 --log_dir /tmp/nospam-612182 stop
--- PASS: TestErrorSpam/stop (12.64s)

TestFunctional/serial/CopySyncFile (0s)
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/21969-103686/.minikube/files/etc/test/nested/copy/107234/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (41.25s)
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-amd64 start -p functional-709702 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
functional_test.go:2239: (dbg) Done: out/minikube-linux-amd64 start -p functional-709702 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: (41.254399188s)
--- PASS: TestFunctional/serial/StartWithProxy (41.25s)

TestFunctional/serial/AuditLog (0s)
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (6.19s)
=== RUN   TestFunctional/serial/SoftStart
I1123 08:26:40.490136  107234 config.go:182] Loaded profile config "functional-709702": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
functional_test.go:674: (dbg) Run:  out/minikube-linux-amd64 start -p functional-709702 --alsologtostderr -v=8
functional_test.go:674: (dbg) Done: out/minikube-linux-amd64 start -p functional-709702 --alsologtostderr -v=8: (6.189953909s)
functional_test.go:678: soft start took 6.192143268s for "functional-709702" cluster.
I1123 08:26:46.682316  107234 config.go:182] Loaded profile config "functional-709702": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/SoftStart (6.19s)

TestFunctional/serial/KubeContext (0.04s)
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

TestFunctional/serial/KubectlGetPods (0.07s)
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-709702 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.07s)

TestFunctional/serial/CacheCmd/cache/add_remote (4.53s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-709702 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-709702 cache add registry.k8s.io/pause:3.1: (1.49359245s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-709702 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-709702 cache add registry.k8s.io/pause:3.3: (1.567898078s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-709702 cache add registry.k8s.io/pause:latest
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-709702 cache add registry.k8s.io/pause:latest: (1.470567974s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (4.53s)

TestFunctional/serial/CacheCmd/cache/add_local (2.33s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-709702 /tmp/TestFunctionalserialCacheCmdcacheadd_local3665894022/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-amd64 -p functional-709702 cache add minikube-local-cache-test:functional-709702
functional_test.go:1104: (dbg) Done: out/minikube-linux-amd64 -p functional-709702 cache add minikube-local-cache-test:functional-709702: (2.00217954s)
functional_test.go:1109: (dbg) Run:  out/minikube-linux-amd64 -p functional-709702 cache delete minikube-local-cache-test:functional-709702
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-709702
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (2.33s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

TestFunctional/serial/CacheCmd/cache/list (0.06s)
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.3s)
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-amd64 -p functional-709702 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.30s)

TestFunctional/serial/CacheCmd/cache/cache_reload (2.13s)
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-amd64 -p functional-709702 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 -p functional-709702 ssh sudo crictl inspecti registry.k8s.io/pause:latest
E1123 08:26:54.520761  107234 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/addons-450053/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 08:26:54.527219  107234 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/addons-450053/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 08:26:54.538650  107234 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/addons-450053/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 08:26:54.560061  107234 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/addons-450053/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 08:26:54.601489  107234 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/addons-450053/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 08:26:54.682954  107234 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/addons-450053/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-709702 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (299.282835ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-amd64 -p functional-709702 cache reload
E1123 08:26:54.845181  107234 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/addons-450053/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 08:26:55.166947  107234 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/addons-450053/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 08:26:55.809106  107234 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/addons-450053/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:1173: (dbg) Done: out/minikube-linux-amd64 -p functional-709702 cache reload: (1.224159543s)
functional_test.go:1178: (dbg) Run:  out/minikube-linux-amd64 -p functional-709702 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (2.13s)
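
Editor's note: the cache_reload sequence above is a useful round-trip — evict the image from the node's runtime, confirm crictl no longer sees it, then push the cached copy back. Condensed from the log (using a minikube binary on PATH instead of the test's out/minikube-linux-amd64):

    minikube -p functional-709702 ssh sudo crictl rmi registry.k8s.io/pause:latest       # remove image from the node
    minikube -p functional-709702 ssh sudo crictl inspecti registry.k8s.io/pause:latest  # fails: image gone
    minikube -p functional-709702 cache reload                                           # re-load cached images into the node
    minikube -p functional-709702 ssh sudo crictl inspecti registry.k8s.io/pause:latest  # succeeds again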

TestFunctional/serial/CacheCmd/cache/delete (0.13s)
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.13s)

TestFunctional/serial/MinikubeKubectlCmd (0.12s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-amd64 -p functional-709702 kubectl -- --context functional-709702 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.12s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.12s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-709702 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.12s)

TestFunctional/serial/ExtraConfig (66.3s)
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-amd64 start -p functional-709702 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1123 08:26:57.090675  107234 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/addons-450053/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 08:26:59.652922  107234 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/addons-450053/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 08:27:04.774454  107234 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/addons-450053/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 08:27:15.016546  107234 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/addons-450053/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 08:27:35.498091  107234 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/addons-450053/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Done: out/minikube-linux-amd64 start -p functional-709702 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (1m6.299656025s)
functional_test.go:776: restart took 1m6.299792692s for "functional-709702" cluster.
I1123 08:28:02.879634  107234 config.go:182] Loaded profile config "functional-709702": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/ExtraConfig (66.30s)
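
Editor's note: the restart above threads --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision through to the kube-apiserver static pod. A hedged spot-check, assuming the standard kubeadm manifest path:

    # confirm the extra admission-plugins flag landed in the apiserver manifest
    minikube -p functional-709702 ssh -- "sudo grep enable-admission-plugins /etc/kubernetes/manifests/kube-apiserver.yaml"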

TestFunctional/serial/ComponentHealth (0.07s)
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-709702 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

TestFunctional/serial/LogsCmd (1.24s)
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-amd64 -p functional-709702 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-amd64 -p functional-709702 logs: (1.234900993s)
--- PASS: TestFunctional/serial/LogsCmd (1.24s)

TestFunctional/serial/LogsFileCmd (1.25s)
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-amd64 -p functional-709702 logs --file /tmp/TestFunctionalserialLogsFileCmd3698926351/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-amd64 -p functional-709702 logs --file /tmp/TestFunctionalserialLogsFileCmd3698926351/001/logs.txt: (1.251307259s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.25s)

TestFunctional/serial/InvalidService (3.88s)
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-709702 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-709702
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-709702: exit status 115 (353.155895ms)

-- stdout --
	┌───────────┬─────────────┬─────────────┬───────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL            │
	├───────────┼─────────────┼─────────────┼───────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.49.2:32476 │
	└───────────┴─────────────┴─────────────┴───────────────────────────┘
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-709702 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (3.88s)
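
Editor's note: the exit status 115 / SVC_UNREACHABLE above is minikube declining to expose a Service with no running pods behind it. A quick way to see the underlying condition (illustrative only — the invalid-svc manifest is deleted at the end of the test):

    # a Service with no ready backends shows empty ENDPOINTS
    kubectl --context functional-709702 get endpoints invalid-svc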

TestFunctional/parallel/ConfigCmd (0.45s)
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-709702 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-709702 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-709702 config get cpus: exit status 14 (78.452554ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-709702 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-709702 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-709702 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-709702 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-709702 config get cpus: exit status 14 (81.486628ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.45s)
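
Editor's note: the ConfigCmd run demonstrates the config subcommand's contract — `config get` on an unset key exits with status 14, while set/unset round-trip cleanly. Condensed from the log:

    minikube -p functional-709702 config set cpus 2
    minikube -p functional-709702 config get cpus    # prints 2
    minikube -p functional-709702 config unset cpus
    minikube -p functional-709702 config get cpus    # exit status 14: key not found in config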

TestFunctional/parallel/DashboardCmd (7.73s)
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-709702 --alsologtostderr -v=1]
functional_test.go:925: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-709702 --alsologtostderr -v=1] ...
helpers_test.go:525: unable to kill pid 146629: os: process already finished
E1123 08:29:38.381711  107234 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/addons-450053/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 08:31:54.520246  107234 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/addons-450053/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 08:32:22.223807  107234 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/addons-450053/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 08:36:54.520762  107234 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/addons-450053/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestFunctional/parallel/DashboardCmd (7.73s)

TestFunctional/parallel/DryRun (0.45s)
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-amd64 start -p functional-709702 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-709702 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (193.116629ms)

-- stdout --
	* [functional-709702] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21969
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21969-103686/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21969-103686/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I1123 08:28:41.671037  145720 out.go:360] Setting OutFile to fd 1 ...
	I1123 08:28:41.671144  145720 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:28:41.671157  145720 out.go:374] Setting ErrFile to fd 2...
	I1123 08:28:41.671162  145720 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:28:41.671374  145720 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21969-103686/.minikube/bin
	I1123 08:28:41.671868  145720 out.go:368] Setting JSON to false
	I1123 08:28:41.673010  145720 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":4262,"bootTime":1763882260,"procs":245,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1123 08:28:41.673065  145720 start.go:143] virtualization: kvm guest
	I1123 08:28:41.676043  145720 out.go:179] * [functional-709702] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1123 08:28:41.677729  145720 out.go:179]   - MINIKUBE_LOCATION=21969
	I1123 08:28:41.677741  145720 notify.go:221] Checking for updates...
	I1123 08:28:41.680962  145720 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1123 08:28:41.682319  145720 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21969-103686/kubeconfig
	I1123 08:28:41.683724  145720 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21969-103686/.minikube
	I1123 08:28:41.685063  145720 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1123 08:28:41.686284  145720 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1123 08:28:41.691029  145720 config.go:182] Loaded profile config "functional-709702": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 08:28:41.691779  145720 driver.go:422] Setting default libvirt URI to qemu:///system
	I1123 08:28:41.717985  145720 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1123 08:28:41.718099  145720 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 08:28:41.786840  145720 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:39 OomKillDisable:false NGoroutines:57 SystemTime:2025-11-23 08:28:41.773910061 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1123 08:28:41.787014  145720 docker.go:319] overlay module found
	I1123 08:28:41.788881  145720 out.go:179] * Using the docker driver based on existing profile
	I1123 08:28:41.790208  145720 start.go:309] selected driver: docker
	I1123 08:28:41.790230  145720 start.go:927] validating driver "docker" against &{Name:functional-709702 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-709702 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Moun
tPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 08:28:41.790348  145720 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1123 08:28:41.792202  145720 out.go:203] 
	W1123 08:28:41.793643  145720 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1123 08:28:41.796162  145720 out.go:203] 

** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-amd64 start -p functional-709702 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.45s)
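
For context, a minimal Go sketch of the memory floor check behind the RSRC_INSUFFICIENT_REQ_MEMORY failure above. The constant and function names are illustrative assumptions, not minikube's actual implementation; only the 1800MB floor and exit status 23 come from the log:

package main

import (
	"fmt"
	"os"
)

// minUsableMemoryMB is the floor reported in the log above.
const minUsableMemoryMB = 1800

// validateRequestedMemory is a hypothetical stand-in for minikube's check.
func validateRequestedMemory(requestedMB int) error {
	if requestedMB < minUsableMemoryMB {
		return fmt.Errorf("requested memory allocation %dMB is less than the usable minimum of %dMB",
			requestedMB, minUsableMemoryMB)
	}
	return nil
}

func main() {
	if err := validateRequestedMemory(250); err != nil {
		fmt.Fprintln(os.Stderr, "X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY:", err)
		os.Exit(23) // the exit status the dry-run test asserts
	}
}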

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.18s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-amd64 start -p functional-709702 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-709702 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (175.222367ms)

-- stdout --
	* [functional-709702] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21969
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21969-103686/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21969-103686/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant

-- /stdout --
** stderr ** 
	I1123 08:28:29.218454  143111 out.go:360] Setting OutFile to fd 1 ...
	I1123 08:28:29.218546  143111 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:28:29.218550  143111 out.go:374] Setting ErrFile to fd 2...
	I1123 08:28:29.218554  143111 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:28:29.218861  143111 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21969-103686/.minikube/bin
	I1123 08:28:29.219295  143111 out.go:368] Setting JSON to false
	I1123 08:28:29.220220  143111 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":4249,"bootTime":1763882260,"procs":239,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1123 08:28:29.220284  143111 start.go:143] virtualization: kvm guest
	I1123 08:28:29.226342  143111 out.go:179] * [functional-709702] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	I1123 08:28:29.227698  143111 out.go:179]   - MINIKUBE_LOCATION=21969
	I1123 08:28:29.227695  143111 notify.go:221] Checking for updates...
	I1123 08:28:29.230261  143111 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1123 08:28:29.231621  143111 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21969-103686/kubeconfig
	I1123 08:28:29.232873  143111 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21969-103686/.minikube
	I1123 08:28:29.234063  143111 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1123 08:28:29.235102  143111 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1123 08:28:29.236574  143111 config.go:182] Loaded profile config "functional-709702": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 08:28:29.237207  143111 driver.go:422] Setting default libvirt URI to qemu:///system
	I1123 08:28:29.262148  143111 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1123 08:28:29.262268  143111 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 08:28:29.321796  143111 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:39 OomKillDisable:false NGoroutines:57 SystemTime:2025-11-23 08:28:29.3114549 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86
_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map
[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1123 08:28:29.322252  143111 docker.go:319] overlay module found
	I1123 08:28:29.324546  143111 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I1123 08:28:29.325705  143111 start.go:309] selected driver: docker
	I1123 08:28:29.325729  143111 start.go:927] validating driver "docker" against &{Name:functional-709702 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-709702 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Moun
tPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1123 08:28:29.325807  143111 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1123 08:28:29.327519  143111 out.go:203] 
	W1123 08:28:29.328636  143111 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1123 08:28:29.329895  143111 out.go:203] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.18s)
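
The French output above is the point of this test: under a French locale the same dry-run fails with the same RSRC_INSUFFICIENT_REQ_MEMORY reason ("Utilisation du pilote docker basé sur le profil existant" is the French rendering of "Using the docker driver based on existing profile"). As a rough illustration only, and not minikube's actual translation machinery, locale-keyed message selection might look like:

package main

import (
	"fmt"
	"os"
	"strings"
)

// messages is a hypothetical locale-keyed table; minikube's real
// translations live elsewhere.
var messages = map[string]map[string]string{
	"fr": {"using-driver": "Utilisation du pilote %s basé sur le profil existant"},
	"en": {"using-driver": "Using the %s driver based on existing profile"},
}

// localized picks a language from LANG and formats the keyed message.
func localized(key string, args ...interface{}) string {
	lang := "en"
	if l := os.Getenv("LANG"); strings.HasPrefix(l, "fr") {
		lang = "fr"
	}
	return fmt.Sprintf(messages[lang][key], args...)
}

func main() {
	fmt.Println("* " + localized("using-driver", "docker"))
}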

                                                
                                    
TestFunctional/parallel/StatusCmd (1.07s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-amd64 -p functional-709702 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-amd64 -p functional-709702 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-amd64 -p functional-709702 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.07s)
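
The -f argument above is a Go text/template rendered against a status struct. Note that "kublet" is only a label typo in the test's format string; the output is unaffected because the referenced field is .Kubelet. A self-contained sketch, with field names assumed from the template's references:

package main

import (
	"os"
	"text/template"
)

// Status holds the fields the template dereferences.
type Status struct {
	Host, Kubelet, APIServer, Kubeconfig string
}

func main() {
	// The exact format string passed via -f in the test run above.
	const format = "host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}\n"
	tmpl := template.Must(template.New("status").Parse(format))
	if err := tmpl.Execute(os.Stdout, Status{"Running", "Running", "Running", "Configured"}); err != nil {
		panic(err)
	}
}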

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.2s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-amd64 -p functional-709702 addons list
E1123 08:28:16.459832  107234 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/addons-450053/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:1707: (dbg) Run:  out/minikube-linux-amd64 -p functional-709702 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.20s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (29.77s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:352: "storage-provisioner" [87801e58-e02b-4fd9-a4a0-6a27293d752c] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.00417564s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-709702 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-709702 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-709702 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-709702 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [baba998a-e26f-43cd-86df-982b4075a9cb] Pending
helpers_test.go:352: "sp-pod" [baba998a-e26f-43cd-86df-982b4075a9cb] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [baba998a-e26f-43cd-86df-982b4075a9cb] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 13.004539404s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-709702 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-709702 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-709702 apply -f testdata/storage-provisioner/pod.yaml
I1123 08:28:31.506753  107234 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [d0e755b1-7067-4866-abeb-6c25a37ed3de] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [d0e755b1-7067-4866-abeb-6c25a37ed3de] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 10.003975814s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-709702 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (29.77s)
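
A minimal client-go sketch of the "waiting ... for pods matching" steps above, under the assumption that the helpers_test.go poller is essentially a poll-until-Running loop; the interval and structure here are illustrative, not the harness's actual code:

package main

import (
	"context"
	"fmt"
	"os"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForRunningPods polls until every pod matching selector is Running.
func waitForRunningPods(ctx context.Context, cs kubernetes.Interface, ns, selector string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(ctx, 2*time.Second, timeout, true,
		func(ctx context.Context) (bool, error) {
			pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
			if err != nil {
				return false, err // stop polling on API errors
			}
			if len(pods.Items) == 0 {
				return false, nil // no pod yet; keep waiting
			}
			for _, p := range pods.Items {
				if p.Status.Phase != corev1.PodRunning {
					return false, nil // e.g. Pending / ContainersNotReady above
				}
			}
			return true, nil
		})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	if err := waitForRunningPods(context.Background(), cs, "default", "test=storage-provisioner", 6*time.Minute); err != nil {
		panic(err)
	}
	fmt.Println("pods healthy")
}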

                                                
                                    
TestFunctional/parallel/SSHCmd (0.64s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-amd64 -p functional-709702 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-amd64 -p functional-709702 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.64s)

                                                
                                    
TestFunctional/parallel/CpCmd (2.04s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-709702 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-709702 ssh -n functional-709702 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-709702 cp functional-709702:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd1848354076/001/cp-test.txt
I1123 08:28:17.359856  107234 detect.go:223] nested VM detected
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-709702 ssh -n functional-709702 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-709702 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-709702 ssh -n functional-709702 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.04s)
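
Each cp step above is a round trip: copy a file into the node, then read it back over ssh. A compact sketch of that sequence, shelling out to the same binary and paths shown in the log (error handling simplified):

package main

import (
	"bytes"
	"fmt"
	"os/exec"
)

func main() {
	mk := "out/minikube-linux-amd64"
	// run executes the minikube binary and returns trimmed stdout.
	run := func(args ...string) string {
		out, err := exec.Command(mk, args...).Output()
		if err != nil {
			panic(err)
		}
		return string(bytes.TrimSpace(out))
	}
	// Copy into the node, then cat it back for comparison.
	run("-p", "functional-709702", "cp", "testdata/cp-test.txt", "/home/docker/cp-test.txt")
	got := run("-p", "functional-709702", "ssh", "-n", "functional-709702", "sudo cat /home/docker/cp-test.txt")
	fmt.Println("round-tripped contents:", got)
}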

                                                
                                    
TestFunctional/parallel/MySQL (17.67s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1798: (dbg) Run:  kubectl --context functional-709702 replace --force -f testdata/mysql.yaml
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:352: "mysql-5bb876957f-8tl2w" [f4a3abe2-6f8d-4e73-8470-745527598391] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:352: "mysql-5bb876957f-8tl2w" [f4a3abe2-6f8d-4e73-8470-745527598391] Running
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 16.003476467s
functional_test.go:1812: (dbg) Run:  kubectl --context functional-709702 exec mysql-5bb876957f-8tl2w -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-709702 exec mysql-5bb876957f-8tl2w -- mysql -ppassword -e "show databases;": exit status 1 (89.31444ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
I1123 08:28:27.782813  107234 retry.go:31] will retry after 1.289793236s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-709702 exec mysql-5bb876957f-8tl2w -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (17.67s)
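
The ERROR 2002 above is expected on first contact: the pod is Running but mysqld is not yet accepting socket connections, so the harness retries (retry.go:31). A sketch of that retry shape, with an assumed exponential backoff; the harness's exact policy may differ:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	// probe reruns the exact command from the log until mysqld answers.
	probe := func() error {
		return exec.Command("kubectl", "--context", "functional-709702",
			"exec", "mysql-5bb876957f-8tl2w", "--",
			"mysql", "-ppassword", "-e", "show databases;").Run()
	}
	delay := time.Second
	for attempt := 1; attempt <= 5; attempt++ {
		if err := probe(); err == nil {
			fmt.Println("mysqld is accepting connections")
			return
		}
		fmt.Printf("will retry after %s (attempt %d)\n", delay, attempt)
		time.Sleep(delay)
		delay *= 2 // simple exponential backoff; illustrative only
	}
}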

                                                
                                    
TestFunctional/parallel/FileSync (0.32s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/107234/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-amd64 -p functional-709702 ssh "sudo cat /etc/test/nested/copy/107234/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.32s)

                                                
                                    
TestFunctional/parallel/CertSync (2s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/107234.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-709702 ssh "sudo cat /etc/ssl/certs/107234.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/107234.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-709702 ssh "sudo cat /usr/share/ca-certificates/107234.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-709702 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/1072342.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-709702 ssh "sudo cat /etc/ssl/certs/1072342.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/1072342.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-709702 ssh "sudo cat /usr/share/ca-certificates/1072342.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-709702 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.00s)
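
The 51391683.0 / 3ec20f2e.0 names checked above follow the OpenSSL convention of naming CA links in /etc/ssl/certs after the certificate's subject hash. A sketch that reproduces such a name, assuming openssl is on PATH; the mapping of these specific hashes to the test certs is inferred from the paths in the log:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// hashedName returns the <subject_hash>.0 filename OpenSSL-style
// trust stores use for a certificate.
func hashedName(certPath string) (string, error) {
	out, err := exec.Command("openssl", "x509", "-noout", "-subject_hash", "-in", certPath).Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)) + ".0", nil
}

func main() {
	name, err := hashedName("/etc/ssl/certs/107234.pem")
	if err != nil {
		panic(err)
	}
	fmt.Println("expected link name:", name) // e.g. 51391683.0
}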

                                                
                                    
TestFunctional/parallel/NodeLabels (0.08s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-709702 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.08s)

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.65s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-709702 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-709702 ssh "sudo systemctl is-active docker": exit status 1 (325.429275ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-709702 ssh "sudo systemctl is-active containerd"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-709702 ssh "sudo systemctl is-active containerd": exit status 1 (320.17442ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.65s)
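
The "Non-zero exit ... exit status 1" results here are the expected outcome: systemctl is-active exits 3 for an inactive unit, the ssh wrapper surfaces that as a failure, and stdout still carries the "inactive" verdict the test wants (crio is the active runtime, so docker and containerd must be off). A sketch of that exit-code interpretation:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

// runtimeActive reports whether a systemd unit inside the node is active,
// treating any non-zero exit (systemd uses 3 for inactive) as "disabled".
func runtimeActive(profile, unit string) (bool, error) {
	cmd := exec.Command("out/minikube-linux-amd64", "-p", profile,
		"ssh", "sudo systemctl is-active "+unit)
	err := cmd.Run()
	if err == nil {
		return true, nil // exit 0: unit is active
	}
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		return false, nil // non-zero exit: inactive/failed, as expected here
	}
	return false, err // ssh or exec failure, not a unit status
}

func main() {
	for _, unit := range []string{"docker", "containerd"} {
		active, err := runtimeActive("functional-709702", unit)
		fmt.Println(unit, "active:", active, "err:", err)
	}
}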

                                                
                                    
TestFunctional/parallel/License (0.85s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License
=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.85s)

                                                
                                    
TestFunctional/parallel/Version/short (0.06s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-amd64 -p functional-709702 version --short
--- PASS: TestFunctional/parallel/Version/short (0.06s)

                                                
                                    
TestFunctional/parallel/Version/components (0.51s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-amd64 -p functional-709702 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.51s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.23s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-709702 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-709702 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.1
registry.k8s.io/kube-proxy:v1.34.1
registry.k8s.io/kube-controller-manager:v1.34.1
registry.k8s.io/kube-apiserver:v1.34.1
registry.k8s.io/etcd:3.6.4-0
registry.k8s.io/coredns/coredns:v1.12.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/mysql:5.7
docker.io/kindest/kindnetd:v20250512-df8de77b
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-709702 image ls --format short --alsologtostderr:
I1123 08:28:43.497610  146832 out.go:360] Setting OutFile to fd 1 ...
I1123 08:28:43.497853  146832 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1123 08:28:43.497860  146832 out.go:374] Setting ErrFile to fd 2...
I1123 08:28:43.497864  146832 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1123 08:28:43.498048  146832 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21969-103686/.minikube/bin
I1123 08:28:43.498635  146832 config.go:182] Loaded profile config "functional-709702": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1123 08:28:43.498727  146832 config.go:182] Loaded profile config "functional-709702": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1123 08:28:43.499155  146832 cli_runner.go:164] Run: docker container inspect functional-709702 --format={{.State.Status}}
I1123 08:28:43.517799  146832 ssh_runner.go:195] Run: systemctl --version
I1123 08:28:43.517879  146832 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-709702
I1123 08:28:43.535632  146832 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21969-103686/.minikube/machines/functional-709702/id_rsa Username:docker}
I1123 08:28:43.636194  146832 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.23s)
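
As the stderr trace shows, image ls is backed by sudo crictl images --output json on the node. A sketch of consuming that output; the struct mirrors the fields visible in the JSON and YAML listings below, but the exact crictl schema is an assumption:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// crictlImages models the JSON shape crictl emits (assumed).
type crictlImages struct {
	Images []struct {
		ID          string   `json:"id"`
		RepoTags    []string `json:"repoTags"`
		RepoDigests []string `json:"repoDigests"`
		Size        string   `json:"size"`
	} `json:"images"`
}

func main() {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		panic(err)
	}
	var imgs crictlImages
	if err := json.Unmarshal(out, &imgs); err != nil {
		panic(err)
	}
	for _, img := range imgs.Images {
		for _, tag := range img.RepoTags {
			fmt.Println(tag) // same short listing as `image ls --format short`
		}
	}
}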

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (1.15s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-709702 image ls --format table --alsologtostderr
functional_test.go:276: (dbg) Done: out/minikube-linux-amd64 -p functional-709702 image ls --format table --alsologtostderr: (1.154128266s)
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-709702 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                  IMAGE                  │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ docker.io/kindest/kindnetd              │ v20250512-df8de77b │ 409467f978b4a │ 109MB  │
│ registry.k8s.io/coredns/coredns         │ v1.12.1            │ 52546a367cc9e │ 76.1MB │
│ registry.k8s.io/kube-controller-manager │ v1.34.1            │ c80c8dbafe7dd │ 76MB   │
│ docker.io/library/nginx                 │ alpine             │ d4918ca78576a │ 54.3MB │
│ docker.io/library/nginx                 │ latest             │ 60adc2e137e75 │ 155MB  │
│ gcr.io/k8s-minikube/busybox             │ 1.28.4-glibc       │ 56cc512116c8f │ 4.63MB │
│ gcr.io/k8s-minikube/busybox             │ latest             │ beae173ccac6a │ 1.46MB │
│ localhost/my-image                      │ functional-709702  │ 582765d239ff4 │ 1.47MB │
│ registry.k8s.io/etcd                    │ 3.6.4-0            │ 5f1f5298c888d │ 196MB  │
│ registry.k8s.io/kube-scheduler          │ v1.34.1            │ 7dd6aaa1717ab │ 53.8MB │
│ registry.k8s.io/pause                   │ 3.1                │ da86e6ba6ca19 │ 747kB  │
│ registry.k8s.io/kube-apiserver          │ v1.34.1            │ c3994bc696102 │ 89MB   │
│ registry.k8s.io/kube-proxy              │ v1.34.1            │ fc25172553d79 │ 73.1MB │
│ registry.k8s.io/pause                   │ 3.10.1             │ cd073f4c5f6a8 │ 742kB  │
│ registry.k8s.io/pause                   │ latest             │ 350b164e7ae1d │ 247kB  │
│ docker.io/library/mysql                 │ 5.7                │ 5107333e08a87 │ 520MB  │
│ gcr.io/k8s-minikube/storage-provisioner │ v5                 │ 6e38f40d628db │ 31.5MB │
│ registry.k8s.io/pause                   │ 3.3                │ 0184c1613d929 │ 686kB  │
└─────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-709702 image ls --format table --alsologtostderr:
I1123 08:28:47.605330  147543 out.go:360] Setting OutFile to fd 1 ...
I1123 08:28:47.605448  147543 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1123 08:28:47.605461  147543 out.go:374] Setting ErrFile to fd 2...
I1123 08:28:47.605468  147543 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1123 08:28:47.605777  147543 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21969-103686/.minikube/bin
I1123 08:28:47.606577  147543 config.go:182] Loaded profile config "functional-709702": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1123 08:28:47.606757  147543 config.go:182] Loaded profile config "functional-709702": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1123 08:28:47.607298  147543 cli_runner.go:164] Run: docker container inspect functional-709702 --format={{.State.Status}}
I1123 08:28:47.629370  147543 ssh_runner.go:195] Run: systemctl --version
I1123 08:28:47.629436  147543 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-709702
I1123 08:28:47.652228  147543 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21969-103686/.minikube/machines/functional-709702/id_rsa Username:docker}
I1123 08:28:47.762908  147543 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (1.15s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.26s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-709702 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-709702 image ls --format json --alsologtostderr:
[{"id":"c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89","registry.k8s.io/kube-controller-manager@sha256:a6fe41965f1693c8a73ebe75e215d0b7c0902732c66c6692b0dbcfb0f077c992"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.1"],"size":"76004181"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"582765d239ff40260c7a85d43c977340efe2cab12a00cf23a1d3b2676eb20e54","repoDigests":["localhost/my-image@sha256:b5c1be9343c4c15353f638e83c1ae6b13be2caba6993bc052aeaaa6abd16d345"],"repoTags":["localhost/my-image:function
al-709702"],"size":"1468744"},{"id":"5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115","repoDigests":["registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f","registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19"],"repoTags":["registry.k8s.io/etcd:3.6.4-0"],"size":"195976448"},{"id":"c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97","repoDigests":["registry.k8s.io/kube-apiserver@sha256:264da1e0ab552e24b2eb034a1b75745df78fe8903bade1fa0f874f9167dad964","registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902"],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.1"],"size":"89046001"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"409467f978b4a30fe717012736557
d637f66371452c3b279c02b943b367a141c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a","docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"109379124"},{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":["docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb","docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da"],"repoTags":["docker.io/library/mysql:5.7"],"size":"519571821"},{"id":"d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9","repoDigests":["docker.io/library/nginx@sha256:667473807103639a0aca5b49534a216d2b64f0fb868aaa801f023da0cdd781c7","docker.io/library/nginx@sha256:b3c656d55d7ad751196f21b7fd2e8d4da9cb430e32f646adcf92441b72f82b14"],"repoTags":["docker.io/library/nginx:alpine"],"size":"5
4252718"},{"id":"beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee","gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b"],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1462480"},{"id":"cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c","registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"742092"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"ead4f0c0d51476d435b09d043dfd4f71706e5da2fcd3bba6a604670660
cd8326","repoDigests":["docker.io/library/be1f706c525349396b46716026fe98fd47aaf470872431d26d47738c90d14ab1-tmp@sha256:5d621f51f94c7acc2163eb92eb825d58798f447017d4e85697df07834a23c705"],"repoTags":[],"size":"1466132"},{"id":"60adc2e137e757418d4d771822fa3b3f5d3b4ad58ef2385d200c9ee78375b6d5","repoDigests":["docker.io/library/nginx@sha256:553f64aecdc31b5bf944521731cd70e35da4faed96b2b7548a3d8e2598c52a42","docker.io/library/nginx@sha256:5c733364e9a8f7e6d7289ceaad623c6600479fe95c3ab5534f07bfd7416d9541"],"repoTags":["docker.io/library/nginx:latest"],"size":"155491845"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969","repoDigests":["
registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998","registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"76103547"},{"id":"fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7","repoDigests":["registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a","registry.k8s.io/kube-proxy@sha256:9e876d245c76f0e3529c82bb103b60a59c4e190317827f977ab696cc4f43020a"],"repoTags":["registry.k8s.io/kube-proxy:v1.34.1"],"size":"73138073"},{"id":"7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813","repoDigests":["registry.k8s.io/kube-scheduler@sha256:47306e2178d9766fe3fe9eada02fa995f9f29dcbf518832293dfbe16964e2d31","registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500"],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.1"],"size":"53844823"},{"id":
"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a","docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"43824855"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-709702 image ls --format json --alsologtostderr:
I1123 08:28:47.342388  147481 out.go:360] Setting OutFile to fd 1 ...
I1123 08:28:47.342521  147481 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1123 08:28:47.342532  147481 out.go:374] Setting ErrFile to fd 2...
I1123 08:28:47.342538  147481 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1123 08:28:47.342834  147481 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21969-103686/.minikube/bin
I1123 08:28:47.343627  147481 config.go:182] Loaded profile config "functional-709702": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1123 08:28:47.343768  147481 config.go:182] Loaded profile config "functional-709702": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1123 08:28:47.344350  147481 cli_runner.go:164] Run: docker container inspect functional-709702 --format={{.State.Status}}
I1123 08:28:47.365998  147481 ssh_runner.go:195] Run: systemctl --version
I1123 08:28:47.366055  147481 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-709702
I1123 08:28:47.385658  147481 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21969-103686/.minikube/machines/functional-709702/id_rsa Username:docker}
I1123 08:28:47.496929  147481 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.26s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.24s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-709702 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-709702 image ls --format yaml --alsologtostderr:
- id: 409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
- docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "109379124"
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029
repoTags: []
size: "249229937"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "43824855"
- id: c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:264da1e0ab552e24b2eb034a1b75745df78fe8903bade1fa0f874f9167dad964
- registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.1
size: "89046001"
- id: 7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:47306e2178d9766fe3fe9eada02fa995f9f29dcbf518832293dfbe16964e2d31
- registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.1
size: "53844823"
- id: 60adc2e137e757418d4d771822fa3b3f5d3b4ad58ef2385d200c9ee78375b6d5
repoDigests:
- docker.io/library/nginx@sha256:553f64aecdc31b5bf944521731cd70e35da4faed96b2b7548a3d8e2598c52a42
- docker.io/library/nginx@sha256:5c733364e9a8f7e6d7289ceaad623c6600479fe95c3ab5534f07bfd7416d9541
repoTags:
- docker.io/library/nginx:latest
size: "155491845"
- id: beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee
- gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
repoTags:
- gcr.io/k8s-minikube/busybox:latest
size: "1462480"
- id: 5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115
repoDigests:
- registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f
- registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19
repoTags:
- registry.k8s.io/etcd:3.6.4-0
size: "195976448"
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests:
- docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb
- docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da
repoTags:
- docker.io/library/mysql:5.7
size: "519571821"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: 582765d239ff40260c7a85d43c977340efe2cab12a00cf23a1d3b2676eb20e54
repoDigests:
- localhost/my-image@sha256:b5c1be9343c4c15353f638e83c1ae6b13be2caba6993bc052aeaaa6abd16d345
repoTags:
- localhost/my-image:functional-709702
size: "1468744"
- id: 52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998
- registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "76103547"
- id: c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89
- registry.k8s.io/kube-controller-manager@sha256:a6fe41965f1693c8a73ebe75e215d0b7c0902732c66c6692b0dbcfb0f077c992
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.1
size: "76004181"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: ead4f0c0d51476d435b09d043dfd4f71706e5da2fcd3bba6a604670660cd8326
repoDigests:
- docker.io/library/be1f706c525349396b46716026fe98fd47aaf470872431d26d47738c90d14ab1-tmp@sha256:5d621f51f94c7acc2163eb92eb825d58798f447017d4e85697df07834a23c705
repoTags: []
size: "1466132"
- id: d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9
repoDigests:
- docker.io/library/nginx@sha256:667473807103639a0aca5b49534a216d2b64f0fb868aaa801f023da0cdd781c7
- docker.io/library/nginx@sha256:b3c656d55d7ad751196f21b7fd2e8d4da9cb430e32f646adcf92441b72f82b14
repoTags:
- docker.io/library/nginx:alpine
size: "54252718"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7
repoDigests:
- registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a
- registry.k8s.io/kube-proxy@sha256:9e876d245c76f0e3529c82bb103b60a59c4e190317827f977ab696cc4f43020a
repoTags:
- registry.k8s.io/kube-proxy:v1.34.1
size: "73138073"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
- registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41
repoTags:
- registry.k8s.io/pause:3.10.1
size: "742092"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"

                                                
                                                
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-709702 image ls --format yaml --alsologtostderr:
I1123 08:28:48.752783  147621 out.go:360] Setting OutFile to fd 1 ...
I1123 08:28:48.753105  147621 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1123 08:28:48.753118  147621 out.go:374] Setting ErrFile to fd 2...
I1123 08:28:48.753124  147621 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1123 08:28:48.753388  147621 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21969-103686/.minikube/bin
I1123 08:28:48.754061  147621 config.go:182] Loaded profile config "functional-709702": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1123 08:28:48.754193  147621 config.go:182] Loaded profile config "functional-709702": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1123 08:28:48.754798  147621 cli_runner.go:164] Run: docker container inspect functional-709702 --format={{.State.Status}}
I1123 08:28:48.776124  147621 ssh_runner.go:195] Run: systemctl --version
I1123 08:28:48.776175  147621 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-709702
I1123 08:28:48.796680  147621 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21969-103686/.minikube/machines/functional-709702/id_rsa Username:docker}
I1123 08:28:48.896636  147621 ssh_runner.go:195] Run: sudo crictl images --output json
2025/11/23 08:28:49 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.24s)
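
The listing above is produced by shelling into the node and dumping cri-o's image store; `image ls --format yaml` is a re-rendering of `crictl images --output json`. A minimal sketch of reproducing either view by hand, assuming the functional-709702 profile is still running and jq is available (jq is our addition, not part of the test):

  out/minikube-linux-amd64 -p functional-709702 image ls --format yaml
  # or query cri-o directly, as the ssh_runner line above does:
  out/minikube-linux-amd64 -p functional-709702 ssh -- sudo crictl images --output json | jq -r '.images[].repoTags[0]'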

TestFunctional/parallel/ImageCommands/ImageBuild (3.6s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-amd64 -p functional-709702 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-709702 ssh pgrep buildkitd: exit status 1 (277.955424ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-amd64 -p functional-709702 image build -t localhost/my-image:functional-709702 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-amd64 -p functional-709702 image build -t localhost/my-image:functional-709702 testdata/build --alsologtostderr: (3.046229397s)
functional_test.go:335: (dbg) Stdout: out/minikube-linux-amd64 -p functional-709702 image build -t localhost/my-image:functional-709702 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> ead4f0c0d51
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-709702
--> 582765d239f
Successfully tagged localhost/my-image:functional-709702
582765d239ff40260c7a85d43c977340efe2cab12a00cf23a1d3b2676eb20e54
functional_test.go:338: (dbg) Stderr: out/minikube-linux-amd64 -p functional-709702 image build -t localhost/my-image:functional-709702 testdata/build --alsologtostderr:
I1123 08:28:44.005453  146994 out.go:360] Setting OutFile to fd 1 ...
I1123 08:28:44.005759  146994 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1123 08:28:44.005771  146994 out.go:374] Setting ErrFile to fd 2...
I1123 08:28:44.005778  146994 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1123 08:28:44.006022  146994 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21969-103686/.minikube/bin
I1123 08:28:44.006611  146994 config.go:182] Loaded profile config "functional-709702": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1123 08:28:44.007401  146994 config.go:182] Loaded profile config "functional-709702": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1123 08:28:44.007881  146994 cli_runner.go:164] Run: docker container inspect functional-709702 --format={{.State.Status}}
I1123 08:28:44.026446  146994 ssh_runner.go:195] Run: systemctl --version
I1123 08:28:44.026525  146994 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-709702
I1123 08:28:44.044448  146994 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21969-103686/.minikube/machines/functional-709702/id_rsa Username:docker}
I1123 08:28:44.144718  146994 build_images.go:162] Building image from path: /tmp/build.357301170.tar
I1123 08:28:44.144788  146994 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1123 08:28:44.153009  146994 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.357301170.tar
I1123 08:28:44.156786  146994 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.357301170.tar: stat -c "%s %y" /var/lib/minikube/build/build.357301170.tar: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/build/build.357301170.tar': No such file or directory
I1123 08:28:44.156820  146994 ssh_runner.go:362] scp /tmp/build.357301170.tar --> /var/lib/minikube/build/build.357301170.tar (3072 bytes)
I1123 08:28:44.174546  146994 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.357301170
I1123 08:28:44.181928  146994 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.357301170 -xf /var/lib/minikube/build/build.357301170.tar
I1123 08:28:44.189625  146994 crio.go:315] Building image: /var/lib/minikube/build/build.357301170
I1123 08:28:44.189730  146994 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-709702 /var/lib/minikube/build/build.357301170 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I1123 08:28:46.965350  146994 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-709702 /var/lib/minikube/build/build.357301170 --cgroup-manager=cgroupfs: (2.775588299s)
I1123 08:28:46.965445  146994 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.357301170
I1123 08:28:46.975640  146994 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.357301170.tar
I1123 08:28:46.985648  146994 build_images.go:218] Built localhost/my-image:functional-709702 from /tmp/build.357301170.tar
I1123 08:28:46.985685  146994 build_images.go:134] succeeded building to: functional-709702
I1123 08:28:46.985691  146994 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-709702 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.60s)
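
The three STEP lines pin down what the build context must contain: a busybox base, a no-op RUN, and an ADD of content.txt. A hypothetical reconstruction of testdata/build inferred from the transcript (not copied from the repo), plus the command under test; on the crio runtime minikube hands the build to `sudo podman build` inside the node, as the stderr above records:

  # reconstructed Dockerfile -- assumption: matches testdata/build
  printf 'FROM gcr.io/k8s-minikube/busybox\nRUN true\nADD content.txt /\n' > Dockerfile
  echo test > content.txt
  out/minikube-linux-amd64 -p functional-709702 image build -t localhost/my-image:functional-709702 .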

TestFunctional/parallel/ImageCommands/Setup (1.77s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:357: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.740681067s)
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-709702
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.77s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.18s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-709702 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.18s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.17s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-709702 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.17s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.16s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-709702 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.16s)
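
All three no_* variants run the same `update-context` command and differ only in what cluster state exists when it runs; the command rewrites the profile's kubeconfig entry to point at the current apiserver endpoint. A sketch of verifying the result by hand (the jsonpath assumes the standard kubeconfig schema):

  out/minikube-linux-amd64 -p functional-709702 update-context
  kubectl config view --minify -o jsonpath='{.clusters[0].cluster.server}'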

TestFunctional/parallel/ProfileCmd/profile_not_create (0.54s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.54s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.6s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-709702 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-709702 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-709702 tunnel --alsologtostderr] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-709702 tunnel --alsologtostderr] ...
helpers_test.go:525: unable to kill pid 139856: os: process already finished
helpers_test.go:525: unable to kill pid 139460: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.60s)

TestFunctional/parallel/ProfileCmd/profile_list (0.49s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1330: Took "401.209403ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1344: Took "93.621917ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.49s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-amd64 -p functional-709702 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.27s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-709702 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:352: "nginx-svc" [059b3dca-b005-4af9-9128-e95ce6dc82b8] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx-svc" [059b3dca-b005-4af9-9128-e95ce6dc82b8] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 9.008498436s
I1123 08:28:20.155585  107234 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.27s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.5s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1381: Took "430.578633ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1394: Took "71.896952ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.50s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.57s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-amd64 -p functional-709702 image rm kicbase/echo-server:functional-709702 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-709702 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.57s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.07s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-709702 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.07s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.111.91.224 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)
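
Condensed, the tunnel flow these serial steps exercise is: start `minikube tunnel` as a background daemon, poll the LoadBalancer service until it is assigned an ingress IP, then hit that IP directly. A sketch (the service name, jsonpath, and resulting URL come from the log; the polling loop is ours):

  out/minikube-linux-amd64 -p functional-709702 tunnel &
  until IP=$(kubectl --context functional-709702 get svc nginx-svc -o jsonpath='{.status.loadBalancer.ingress[0].ip}') && [ -n "$IP" ]; do sleep 1; done
  curl -sf "http://$IP"    # this run reached http://10.111.91.224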

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-amd64 -p functional-709702 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: signal: terminated
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/parallel/MountCmd/any-port (7.76s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-709702 /tmp/TestFunctionalparallelMountCmdany-port3137681189/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1763886509337141800" to /tmp/TestFunctionalparallelMountCmdany-port3137681189/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1763886509337141800" to /tmp/TestFunctionalparallelMountCmdany-port3137681189/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1763886509337141800" to /tmp/TestFunctionalparallelMountCmdany-port3137681189/001/test-1763886509337141800
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-709702 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-709702 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (294.163657ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
I1123 08:28:29.631673  107234 retry.go:31] will retry after 456.604245ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-709702 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-709702 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Nov 23 08:28 created-by-test
-rw-r--r-- 1 docker docker 24 Nov 23 08:28 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Nov 23 08:28 test-1763886509337141800
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-709702 ssh cat /mount-9p/test-1763886509337141800
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-709702 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:352: "busybox-mount" [23eff5fd-79fe-4cd3-a6a1-1deb8047db5d] Pending
helpers_test.go:352: "busybox-mount" [23eff5fd-79fe-4cd3-a6a1-1deb8047db5d] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:352: "busybox-mount" [23eff5fd-79fe-4cd3-a6a1-1deb8047db5d] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "busybox-mount" [23eff5fd-79fe-4cd3-a6a1-1deb8047db5d] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.003458863s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-709702 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-709702 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-709702 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-709702 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-709702 /tmp/TestFunctionalparallelMountCmdany-port3137681189/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (7.76s)
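
The mount tests all follow the same shape: launch `minikube mount` as a daemon, retry findmnt until the 9p filesystem appears (the first attempt above races the mount and fails, hence the retry), then touch files from both host and guest. A trimmed sketch using a hypothetical /tmp/demo directory:

  mkdir -p /tmp/demo && date > /tmp/demo/created-by-test
  out/minikube-linux-amd64 mount -p functional-709702 /tmp/demo:/mount-9p &
  MOUNT_PID=$!
  out/minikube-linux-amd64 -p functional-709702 ssh -- "findmnt -T /mount-9p | grep 9p"   # retried until it passes
  out/minikube-linux-amd64 -p functional-709702 ssh -- ls -la /mount-9p
  kill "$MOUNT_PID"   # the suite also force-unmounts with `sudo umount -f`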

TestFunctional/parallel/MountCmd/specific-port (1.79s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-709702 /tmp/TestFunctionalparallelMountCmdspecific-port3675369866/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-709702 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-709702 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (293.078236ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
I1123 08:28:37.385956  107234 retry.go:31] will retry after 443.494751ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-709702 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-709702 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-709702 /tmp/TestFunctionalparallelMountCmdspecific-port3675369866/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-709702 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-709702 ssh "sudo umount -f /mount-9p": exit status 1 (277.419365ms)
-- stdout --
	umount: /mount-9p: not mounted.
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-709702 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-709702 /tmp/TestFunctionalparallelMountCmdspecific-port3675369866/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.79s)
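
specific-port is the same flow with the host-side 9p server pinned to a fixed port via --port; the final `umount -f` failing with "not mounted" is the expected end state here, since stopping the daemon had already torn the mount down. The only new knob, in sketch form:

  out/minikube-linux-amd64 mount -p functional-709702 /tmp/demo:/mount-9p --port 46464 &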

TestFunctional/parallel/MountCmd/VerifyCleanup (1.55s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-709702 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2274503550/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-709702 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2274503550/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-709702 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2274503550/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-709702 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-709702 ssh "findmnt -T" /mount1: exit status 1 (341.05742ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
I1123 08:28:39.225201  107234 retry.go:31] will retry after 320.581159ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-709702 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-709702 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-709702 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-709702 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-709702 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2274503550/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-709702 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2274503550/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-709702 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2274503550/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.55s)

TestFunctional/parallel/ServiceCmd/List (1.73s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-amd64 -p functional-709702 service list
functional_test.go:1469: (dbg) Done: out/minikube-linux-amd64 -p functional-709702 service list: (1.734712777s)
--- PASS: TestFunctional/parallel/ServiceCmd/List (1.73s)

TestFunctional/parallel/ServiceCmd/JSONOutput (1.71s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-amd64 -p functional-709702 service list -o json
functional_test.go:1499: (dbg) Done: out/minikube-linux-amd64 -p functional-709702 service list -o json: (1.709203722s)
functional_test.go:1504: Took "1.709309913s" to run "out/minikube-linux-amd64 -p functional-709702 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (1.71s)
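
`service list -o json` emits the same table as machine-readable JSON, which is what the duration assertion parses. A sketch of consuming it; the .Name field is an assumption about the output schema, so inspect the raw JSON before relying on it:

  out/minikube-linux-amd64 -p functional-709702 service list -o json | jq -r '.[].Name'   # field name assumed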

TestFunctional/delete_echo-server_images (0.04s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-709702
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-709702
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-709702
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestMultiControlPlane/serial/StartCluster (114.71s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 -p ha-307893 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 -p ha-307893 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (1m53.936419256s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-307893 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (114.71s)

TestMultiControlPlane/serial/DeployApp (6.86s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 -p ha-307893 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 -p ha-307893 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 -p ha-307893 kubectl -- rollout status deployment/busybox: (4.911436059s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-307893 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 -p ha-307893 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-307893 kubectl -- exec busybox-7b57f96db7-4qp88 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-307893 kubectl -- exec busybox-7b57f96db7-gtclv -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-307893 kubectl -- exec busybox-7b57f96db7-tfjvl -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-307893 kubectl -- exec busybox-7b57f96db7-4qp88 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-307893 kubectl -- exec busybox-7b57f96db7-gtclv -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-307893 kubectl -- exec busybox-7b57f96db7-tfjvl -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-307893 kubectl -- exec busybox-7b57f96db7-4qp88 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-307893 kubectl -- exec busybox-7b57f96db7-gtclv -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-307893 kubectl -- exec busybox-7b57f96db7-tfjvl -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (6.86s)

TestMultiControlPlane/serial/PingHostFromPods (1.02s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 -p ha-307893 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-307893 kubectl -- exec busybox-7b57f96db7-4qp88 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-307893 kubectl -- exec busybox-7b57f96db7-4qp88 -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-307893 kubectl -- exec busybox-7b57f96db7-gtclv -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-307893 kubectl -- exec busybox-7b57f96db7-gtclv -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-307893 kubectl -- exec busybox-7b57f96db7-tfjvl -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-307893 kubectl -- exec busybox-7b57f96db7-tfjvl -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.02s)
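
The pipeline in those exec calls deserves unpacking: with busybox's nslookup output format, line 5 (`awk 'NR==5'`) is the answer row for host.minikube.internal, and the third space-separated field (`cut -d' ' -f3`) is its IP, here the docker network gateway 192.168.49.1, which each pod then pings. Standalone, with commands taken from the log:

  kubectl --context ha-307893 exec busybox-7b57f96db7-4qp88 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
  # -> 192.168.49.1
  kubectl --context ha-307893 exec busybox-7b57f96db7-4qp88 -- sh -c "ping -c 1 192.168.49.1"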

TestMultiControlPlane/serial/AddWorkerNode (26.11s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 -p ha-307893 node add --alsologtostderr -v 5
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 -p ha-307893 node add --alsologtostderr -v 5: (25.207151538s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-307893 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (26.11s)

TestMultiControlPlane/serial/NodeLabels (0.06s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-307893 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.06s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.92s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.92s)

TestMultiControlPlane/serial/CopyFile (17.43s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-307893 status --output json --alsologtostderr -v 5
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-307893 cp testdata/cp-test.txt ha-307893:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-307893 ssh -n ha-307893 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-307893 cp ha-307893:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3143451073/001/cp-test_ha-307893.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-307893 ssh -n ha-307893 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-307893 cp ha-307893:/home/docker/cp-test.txt ha-307893-m02:/home/docker/cp-test_ha-307893_ha-307893-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-307893 ssh -n ha-307893 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-307893 ssh -n ha-307893-m02 "sudo cat /home/docker/cp-test_ha-307893_ha-307893-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-307893 cp ha-307893:/home/docker/cp-test.txt ha-307893-m03:/home/docker/cp-test_ha-307893_ha-307893-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-307893 ssh -n ha-307893 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-307893 ssh -n ha-307893-m03 "sudo cat /home/docker/cp-test_ha-307893_ha-307893-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-307893 cp ha-307893:/home/docker/cp-test.txt ha-307893-m04:/home/docker/cp-test_ha-307893_ha-307893-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-307893 ssh -n ha-307893 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-307893 ssh -n ha-307893-m04 "sudo cat /home/docker/cp-test_ha-307893_ha-307893-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-307893 cp testdata/cp-test.txt ha-307893-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-307893 ssh -n ha-307893-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-307893 cp ha-307893-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3143451073/001/cp-test_ha-307893-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-307893 ssh -n ha-307893-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-307893 cp ha-307893-m02:/home/docker/cp-test.txt ha-307893:/home/docker/cp-test_ha-307893-m02_ha-307893.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-307893 ssh -n ha-307893-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-307893 ssh -n ha-307893 "sudo cat /home/docker/cp-test_ha-307893-m02_ha-307893.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-307893 cp ha-307893-m02:/home/docker/cp-test.txt ha-307893-m03:/home/docker/cp-test_ha-307893-m02_ha-307893-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-307893 ssh -n ha-307893-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-307893 ssh -n ha-307893-m03 "sudo cat /home/docker/cp-test_ha-307893-m02_ha-307893-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-307893 cp ha-307893-m02:/home/docker/cp-test.txt ha-307893-m04:/home/docker/cp-test_ha-307893-m02_ha-307893-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-307893 ssh -n ha-307893-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-307893 ssh -n ha-307893-m04 "sudo cat /home/docker/cp-test_ha-307893-m02_ha-307893-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-307893 cp testdata/cp-test.txt ha-307893-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-307893 ssh -n ha-307893-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-307893 cp ha-307893-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3143451073/001/cp-test_ha-307893-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-307893 ssh -n ha-307893-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-307893 cp ha-307893-m03:/home/docker/cp-test.txt ha-307893:/home/docker/cp-test_ha-307893-m03_ha-307893.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-307893 ssh -n ha-307893-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-307893 ssh -n ha-307893 "sudo cat /home/docker/cp-test_ha-307893-m03_ha-307893.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-307893 cp ha-307893-m03:/home/docker/cp-test.txt ha-307893-m02:/home/docker/cp-test_ha-307893-m03_ha-307893-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-307893 ssh -n ha-307893-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-307893 ssh -n ha-307893-m02 "sudo cat /home/docker/cp-test_ha-307893-m03_ha-307893-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-307893 cp ha-307893-m03:/home/docker/cp-test.txt ha-307893-m04:/home/docker/cp-test_ha-307893-m03_ha-307893-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-307893 ssh -n ha-307893-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-307893 ssh -n ha-307893-m04 "sudo cat /home/docker/cp-test_ha-307893-m03_ha-307893-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-307893 cp testdata/cp-test.txt ha-307893-m04:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-307893 ssh -n ha-307893-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-307893 cp ha-307893-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile3143451073/001/cp-test_ha-307893-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-307893 ssh -n ha-307893-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-307893 cp ha-307893-m04:/home/docker/cp-test.txt ha-307893:/home/docker/cp-test_ha-307893-m04_ha-307893.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-307893 ssh -n ha-307893-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-307893 ssh -n ha-307893 "sudo cat /home/docker/cp-test_ha-307893-m04_ha-307893.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-307893 cp ha-307893-m04:/home/docker/cp-test.txt ha-307893-m02:/home/docker/cp-test_ha-307893-m04_ha-307893-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-307893 ssh -n ha-307893-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-307893 ssh -n ha-307893-m02 "sudo cat /home/docker/cp-test_ha-307893-m04_ha-307893-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-307893 cp ha-307893-m04:/home/docker/cp-test.txt ha-307893-m03:/home/docker/cp-test_ha-307893-m04_ha-307893-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-307893 ssh -n ha-307893-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-307893 ssh -n ha-307893-m03 "sudo cat /home/docker/cp-test_ha-307893-m04_ha-307893-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (17.43s)
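
The wall of cp/ssh pairs above is a full matrix: for each of the four nodes, `minikube cp` copies a file in from the host, back out to the host, and across to every other node, and every hop is verified with `ssh -n <node> sudo cat`. One cell of that matrix, spelled out with commands taken verbatim from the log:

  out/minikube-linux-amd64 -p ha-307893 cp testdata/cp-test.txt ha-307893-m02:/home/docker/cp-test.txt
  out/minikube-linux-amd64 -p ha-307893 ssh -n ha-307893-m02 "sudo cat /home/docker/cp-test.txt"
  out/minikube-linux-amd64 -p ha-307893 cp ha-307893-m02:/home/docker/cp-test.txt ha-307893-m03:/home/docker/cp-test_ha-307893-m02_ha-307893-m03.txt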

TestMultiControlPlane/serial/StopSecondaryNode (14.32s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-307893 node stop m02 --alsologtostderr -v 5
ha_test.go:365: (dbg) Done: out/minikube-linux-amd64 -p ha-307893 node stop m02 --alsologtostderr -v 5: (13.592159383s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-307893 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-307893 status --alsologtostderr -v 5: exit status 7 (726.109688ms)
-- stdout --
	ha-307893
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-307893-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-307893-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-307893-m04
	type: Worker
	host: Running
	kubelet: Running
	
-- /stdout --
** stderr ** 
	I1123 08:41:32.343028  171337 out.go:360] Setting OutFile to fd 1 ...
	I1123 08:41:32.343158  171337 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:41:32.343169  171337 out.go:374] Setting ErrFile to fd 2...
	I1123 08:41:32.343173  171337 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:41:32.343394  171337 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21969-103686/.minikube/bin
	I1123 08:41:32.343583  171337 out.go:368] Setting JSON to false
	I1123 08:41:32.343617  171337 mustload.go:66] Loading cluster: ha-307893
	I1123 08:41:32.343743  171337 notify.go:221] Checking for updates...
	I1123 08:41:32.344101  171337 config.go:182] Loaded profile config "ha-307893": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 08:41:32.344122  171337 status.go:174] checking status of ha-307893 ...
	I1123 08:41:32.344563  171337 cli_runner.go:164] Run: docker container inspect ha-307893 --format={{.State.Status}}
	I1123 08:41:32.366674  171337 status.go:371] ha-307893 host status = "Running" (err=<nil>)
	I1123 08:41:32.366695  171337 host.go:66] Checking if "ha-307893" exists ...
	I1123 08:41:32.366978  171337 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-307893
	I1123 08:41:32.387241  171337 host.go:66] Checking if "ha-307893" exists ...
	I1123 08:41:32.387577  171337 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1123 08:41:32.387645  171337 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-307893
	I1123 08:41:32.405535  171337 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21969-103686/.minikube/machines/ha-307893/id_rsa Username:docker}
	I1123 08:41:32.506625  171337 ssh_runner.go:195] Run: systemctl --version
	I1123 08:41:32.513268  171337 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 08:41:32.527520  171337 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 08:41:32.590280  171337 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:76 SystemTime:2025-11-23 08:41:32.579109691 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1123 08:41:32.590838  171337 kubeconfig.go:125] found "ha-307893" server: "https://192.168.49.254:8443"
	I1123 08:41:32.590872  171337 api_server.go:166] Checking apiserver status ...
	I1123 08:41:32.590917  171337 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1123 08:41:32.602709  171337 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1250/cgroup
	W1123 08:41:32.611627  171337 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1250/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1123 08:41:32.611696  171337 ssh_runner.go:195] Run: ls
	I1123 08:41:32.615518  171337 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1123 08:41:32.619531  171337 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1123 08:41:32.619551  171337 status.go:463] ha-307893 apiserver status = Running (err=<nil>)
	I1123 08:41:32.619576  171337 status.go:176] ha-307893 status: &{Name:ha-307893 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1123 08:41:32.619597  171337 status.go:174] checking status of ha-307893-m02 ...
	I1123 08:41:32.619820  171337 cli_runner.go:164] Run: docker container inspect ha-307893-m02 --format={{.State.Status}}
	I1123 08:41:32.640560  171337 status.go:371] ha-307893-m02 host status = "Stopped" (err=<nil>)
	I1123 08:41:32.640597  171337 status.go:384] host is not running, skipping remaining checks
	I1123 08:41:32.640605  171337 status.go:176] ha-307893-m02 status: &{Name:ha-307893-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1123 08:41:32.640629  171337 status.go:174] checking status of ha-307893-m03 ...
	I1123 08:41:32.640888  171337 cli_runner.go:164] Run: docker container inspect ha-307893-m03 --format={{.State.Status}}
	I1123 08:41:32.658523  171337 status.go:371] ha-307893-m03 host status = "Running" (err=<nil>)
	I1123 08:41:32.658549  171337 host.go:66] Checking if "ha-307893-m03" exists ...
	I1123 08:41:32.658808  171337 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-307893-m03
	I1123 08:41:32.676602  171337 host.go:66] Checking if "ha-307893-m03" exists ...
	I1123 08:41:32.676921  171337 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1123 08:41:32.676989  171337 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-307893-m03
	I1123 08:41:32.695631  171337 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21969-103686/.minikube/machines/ha-307893-m03/id_rsa Username:docker}
	I1123 08:41:32.795409  171337 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 08:41:32.808187  171337 kubeconfig.go:125] found "ha-307893" server: "https://192.168.49.254:8443"
	I1123 08:41:32.808221  171337 api_server.go:166] Checking apiserver status ...
	I1123 08:41:32.808263  171337 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1123 08:41:32.821147  171337 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1168/cgroup
	W1123 08:41:32.830183  171337 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1168/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1123 08:41:32.830233  171337 ssh_runner.go:195] Run: ls
	I1123 08:41:32.833996  171337 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1123 08:41:32.839076  171337 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1123 08:41:32.839096  171337 status.go:463] ha-307893-m03 apiserver status = Running (err=<nil>)
	I1123 08:41:32.839104  171337 status.go:176] ha-307893-m03 status: &{Name:ha-307893-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1123 08:41:32.839118  171337 status.go:174] checking status of ha-307893-m04 ...
	I1123 08:41:32.839330  171337 cli_runner.go:164] Run: docker container inspect ha-307893-m04 --format={{.State.Status}}
	I1123 08:41:32.859172  171337 status.go:371] ha-307893-m04 host status = "Running" (err=<nil>)
	I1123 08:41:32.859200  171337 host.go:66] Checking if "ha-307893-m04" exists ...
	I1123 08:41:32.859436  171337 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-307893-m04
	I1123 08:41:32.878640  171337 host.go:66] Checking if "ha-307893-m04" exists ...
	I1123 08:41:32.878984  171337 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1123 08:41:32.879038  171337 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-307893-m04
	I1123 08:41:32.897244  171337 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32798 SSHKeyPath:/home/jenkins/minikube-integration/21969-103686/.minikube/machines/ha-307893-m04/id_rsa Username:docker}
	I1123 08:41:32.996010  171337 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 08:41:33.008299  171337 status.go:176] ha-307893-m04 status: &{Name:ha-307893-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (14.32s)
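The status checks in this log verify the apiserver by hitting its healthz endpoint through the shared control-plane endpoint. A minimal manual sketch of the same probe, assuming the address and port shown above (192.168.49.254:8443):

    # Probe the apiserver health endpoint; -k skips TLS verification since
    # the serving certificate is signed by minikube's own CA.
    curl -sk https://192.168.49.254:8443/healthz
    # Expected output when healthy: ok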

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.75s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.75s)

TestMultiControlPlane/serial/RestartSecondaryNode (14.23s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-307893 node start m02 --alsologtostderr -v 5
ha_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p ha-307893 node start m02 --alsologtostderr -v 5: (13.235585516s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-307893 status --alsologtostderr -v 5
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (14.23s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.92s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.92s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (117.78s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 -p ha-307893 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 -p ha-307893 stop --alsologtostderr -v 5
E1123 08:41:54.521143  107234 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/addons-450053/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:464: (dbg) Done: out/minikube-linux-amd64 -p ha-307893 stop --alsologtostderr -v 5: (53.713427664s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 -p ha-307893 start --wait true --alsologtostderr -v 5
E1123 08:43:10.884603  107234 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/functional-709702/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 08:43:10.891010  107234 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/functional-709702/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 08:43:10.902402  107234 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/functional-709702/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 08:43:10.923804  107234 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/functional-709702/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 08:43:10.965361  107234 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/functional-709702/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 08:43:11.047105  107234 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/functional-709702/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 08:43:11.208700  107234 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/functional-709702/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 08:43:11.530184  107234 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/functional-709702/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 08:43:12.172035  107234 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/functional-709702/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 08:43:13.453643  107234 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/functional-709702/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 08:43:16.015566  107234 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/functional-709702/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 08:43:17.585569  107234 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/addons-450053/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 08:43:21.137053  107234 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/functional-709702/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 08:43:31.378441  107234 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/functional-709702/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Done: out/minikube-linux-amd64 -p ha-307893 start --wait true --alsologtostderr -v 5: (1m3.935446259s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 -p ha-307893 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (117.78s)
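The test's two `node list` calls bracket a full stop/start cycle and are compared afterwards. A hedged sketch of the same check with standard shell tools (temp-file paths are illustrative):

    # Capture the node list, restart the cluster, and diff the two lists;
    # no diff output means the restart kept every node.
    out/minikube-linux-amd64 -p ha-307893 node list > /tmp/nodes.before
    out/minikube-linux-amd64 -p ha-307893 stop
    out/minikube-linux-amd64 -p ha-307893 start --wait true
    out/minikube-linux-amd64 -p ha-307893 node list > /tmp/nodes.after
    diff /tmp/nodes.before /tmp/nodes.after && echo "node list preserved"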

TestMultiControlPlane/serial/DeleteSecondaryNode (10.6s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-307893 node delete m03 --alsologtostderr -v 5
E1123 08:43:51.860136  107234 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/functional-709702/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:489: (dbg) Done: out/minikube-linux-amd64 -p ha-307893 node delete m03 --alsologtostderr -v 5: (9.778861518s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-307893 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (10.60s)
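The go-template in the final step extracts each node's Ready condition. The same query unwrapped from the test's extra quoting (equivalent, not verbatim):

    # Print the status of the Ready condition for every node,
    # one value per line ("True" when a node is Ready).
    kubectl get nodes -o go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}}{{.status}}{{"\n"}}{{end}}{{end}}{{end}}'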

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.73s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.73s)

TestMultiControlPlane/serial/StopCluster (41.66s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-307893 stop --alsologtostderr -v 5
E1123 08:44:32.822130  107234 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/functional-709702/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:533: (dbg) Done: out/minikube-linux-amd64 -p ha-307893 stop --alsologtostderr -v 5: (41.538901552s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-307893 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-307893 status --alsologtostderr -v 5: exit status 7 (121.790944ms)

-- stdout --
	ha-307893
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-307893-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-307893-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1123 08:44:39.619684  185690 out.go:360] Setting OutFile to fd 1 ...
	I1123 08:44:39.619795  185690 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:44:39.619804  185690 out.go:374] Setting ErrFile to fd 2...
	I1123 08:44:39.619809  185690 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:44:39.620026  185690 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21969-103686/.minikube/bin
	I1123 08:44:39.620194  185690 out.go:368] Setting JSON to false
	I1123 08:44:39.620221  185690 mustload.go:66] Loading cluster: ha-307893
	I1123 08:44:39.620353  185690 notify.go:221] Checking for updates...
	I1123 08:44:39.620647  185690 config.go:182] Loaded profile config "ha-307893": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 08:44:39.620667  185690 status.go:174] checking status of ha-307893 ...
	I1123 08:44:39.621286  185690 cli_runner.go:164] Run: docker container inspect ha-307893 --format={{.State.Status}}
	I1123 08:44:39.640492  185690 status.go:371] ha-307893 host status = "Stopped" (err=<nil>)
	I1123 08:44:39.640524  185690 status.go:384] host is not running, skipping remaining checks
	I1123 08:44:39.640533  185690 status.go:176] ha-307893 status: &{Name:ha-307893 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1123 08:44:39.640576  185690 status.go:174] checking status of ha-307893-m02 ...
	I1123 08:44:39.640857  185690 cli_runner.go:164] Run: docker container inspect ha-307893-m02 --format={{.State.Status}}
	I1123 08:44:39.658646  185690 status.go:371] ha-307893-m02 host status = "Stopped" (err=<nil>)
	I1123 08:44:39.658676  185690 status.go:384] host is not running, skipping remaining checks
	I1123 08:44:39.658686  185690 status.go:176] ha-307893-m02 status: &{Name:ha-307893-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1123 08:44:39.658711  185690 status.go:174] checking status of ha-307893-m04 ...
	I1123 08:44:39.658945  185690 cli_runner.go:164] Run: docker container inspect ha-307893-m04 --format={{.State.Status}}
	I1123 08:44:39.676893  185690 status.go:371] ha-307893-m04 host status = "Stopped" (err=<nil>)
	I1123 08:44:39.676931  185690 status.go:384] host is not running, skipping remaining checks
	I1123 08:44:39.676939  185690 status.go:176] ha-307893-m04 status: &{Name:ha-307893-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (41.66s)
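Note the exit status 7 above: `minikube status` deliberately exits non-zero when any node is stopped, so scripts can branch on the return code instead of parsing the output. A minimal sketch:

    # Exit code 0 means every node is up; non-zero (7 in this run)
    # signals at least one stopped or errored node.
    if out/minikube-linux-amd64 -p ha-307893 status; then
        echo "cluster fully running"
    else
        echo "cluster degraded or stopped"
    fi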

TestMultiControlPlane/serial/RestartCluster (54.31s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 -p ha-307893 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
ha_test.go:562: (dbg) Done: out/minikube-linux-amd64 -p ha-307893 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (53.459996037s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-amd64 -p ha-307893 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (54.31s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.72s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.72s)

TestMultiControlPlane/serial/AddSecondaryNode (43.91s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-amd64 -p ha-307893 node add --control-plane --alsologtostderr -v 5
E1123 08:45:54.744416  107234 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/functional-709702/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:607: (dbg) Done: out/minikube-linux-amd64 -p ha-307893 node add --control-plane --alsologtostderr -v 5: (42.997187592s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-amd64 -p ha-307893 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (43.91s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.93s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.93s)

TestJSONOutput/start/Command (40.46s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-430618 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio
E1123 08:46:54.520261  107234 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/addons-450053/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-430618 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio: (40.46074943s)
--- PASS: TestJSONOutput/start/Command (40.46s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (6.08s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-430618 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-430618 --output=json --user=testUser: (6.084747541s)
--- PASS: TestJSONOutput/stop/Command (6.08s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.24s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-084689 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-084689 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (78.500289ms)

-- stdout --
	{"specversion":"1.0","id":"4376e9d6-4d3e-428e-a937-ff7b18405f44","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-084689] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"78492798-924c-475c-9b2b-aea479744293","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21969"}}
	{"specversion":"1.0","id":"70489c77-d126-4bf0-9c1a-03815ee6d499","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"0b754b40-b769-4c95-b714-d8b99daf3b7c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21969-103686/kubeconfig"}}
	{"specversion":"1.0","id":"9c653844-b501-4de8-9860-e2c10df76075","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21969-103686/.minikube"}}
	{"specversion":"1.0","id":"6db529c5-0043-4b77-ac3b-7386cc78f1d7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"a3d6634c-e189-4f53-a555-65cd2d1b7e01","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"866c0273-fe5f-4d2d-902b-2685d5c7b6b2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-084689" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-084689
--- PASS: TestErrorJSONOutput (0.24s)
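Each stdout line above is a CloudEvents envelope, which makes the stream easy to filter mechanically. A hedged jq sketch for pulling out just the error events, with the event type string taken from the log above (flags trimmed from the test's invocation):

    # Re-run the failing start and print only the error message field.
    out/minikube-linux-amd64 start -p json-output-error-084689 --output=json --driver=fail \
      | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | .data.message'
    # Prints: The driver 'fail' is not supported on linux/amd64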

TestKicCustomNetwork/create_custom_network (34.49s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-022177 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-022177 --network=: (32.367857916s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-022177" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-022177
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-022177: (2.104219592s)
--- PASS: TestKicCustomNetwork/create_custom_network (34.49s)

TestKicCustomNetwork/use_default_bridge_network (25.29s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-712077 --network=bridge
E1123 08:48:10.885704  107234 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/functional-709702/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-712077 --network=bridge: (23.266146944s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-712077" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-712077
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-712077: (2.003252183s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (25.29s)

TestKicExistingNetwork (22.67s)

=== RUN   TestKicExistingNetwork
I1123 08:48:22.698263  107234 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1123 08:48:22.715883  107234 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1123 08:48:22.715986  107234 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I1123 08:48:22.716015  107234 cli_runner.go:164] Run: docker network inspect existing-network
W1123 08:48:22.732695  107234 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I1123 08:48:22.732730  107234 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]

stderr:
Error response from daemon: network existing-network not found
I1123 08:48:22.732758  107234 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]

-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found

** /stderr **
I1123 08:48:22.732908  107234 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1123 08:48:22.749051  107234 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-f35ea3fda0f8 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:12:67:c4:67:42:d0} reservation:<nil>}
I1123 08:48:22.749616  107234 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00050b790}
I1123 08:48:22.749656  107234 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I1123 08:48:22.749710  107234 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I1123 08:48:22.801113  107234 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-amd64 start -p existing-network-135505 --network=existing-network
E1123 08:48:38.586417  107234 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/functional-709702/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-amd64 start -p existing-network-135505 --network=existing-network: (20.534965482s)
helpers_test.go:175: Cleaning up "existing-network-135505" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p existing-network-135505
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p existing-network-135505: (1.99973466s)
I1123 08:48:45.354107  107234 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (22.67s)
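The test pre-creates the network with plain docker and then points minikube at it. The equivalent standalone steps, with the subnet and inspect format string taken from this run:

    # Create a user-defined bridge network, start minikube on it,
    # then confirm which subnet is actually in use.
    docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 existing-network
    out/minikube-linux-amd64 start -p existing-network-135505 --network=existing-network
    docker network inspect existing-network --format '{{(index .IPAM.Config 0).Subnet}}'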

TestKicCustomSubnet (26.59s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-subnet-935754 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-subnet-935754 --subnet=192.168.60.0/24: (24.445181169s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-935754 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-935754" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p custom-subnet-935754
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p custom-subnet-935754: (2.121737333s)
--- PASS: TestKicCustomSubnet (26.59s)

TestKicStaticIP (23.47s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-amd64 start -p static-ip-612132 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-amd64 start -p static-ip-612132 --static-ip=192.168.200.200: (21.172106681s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-amd64 -p static-ip-612132 ip
helpers_test.go:175: Cleaning up "static-ip-612132" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p static-ip-612132
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p static-ip-612132: (2.145741953s)
--- PASS: TestKicStaticIP (23.47s)
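The closing `ip` call is what verifies the pinned address. The expected round trip, using the values from this run:

    # Start with a fixed node IP, then confirm minikube reports it back.
    out/minikube-linux-amd64 start -p static-ip-612132 --static-ip=192.168.200.200
    out/minikube-linux-amd64 -p static-ip-612132 ip
    # Expected output: 192.168.200.200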

TestMainNoArgs (0.06s)

=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.06s)

TestMinikubeProfile (49.7s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-963279 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-963279 --driver=docker  --container-runtime=crio: (23.151619568s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-976182 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-976182 --driver=docker  --container-runtime=crio: (20.469050664s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-963279
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-976182
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-976182" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-976182
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p second-976182: (2.391008626s)
helpers_test.go:175: Cleaning up "first-963279" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-963279
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p first-963279: (2.381299435s)
--- PASS: TestMinikubeProfile (49.70s)
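`profile list -ojson` emits machine-readable profile state, which the test parses to confirm the active profile switched. A hedged jq sketch of the same check; the .valid array and the Name/Status fields are assumptions about minikube's JSON layout, not shown verbatim in this log:

    # List each valid profile with its status.
    out/minikube-linux-amd64 profile list -ojson \
      | jq -r '.valid[] | "\(.Name)\t\(.Status)"'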

TestMountStart/serial/StartWithMountFirst (7.87s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-188501 --memory=3072 --mount-string /tmp/TestMountStartserial378191071/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-188501 --memory=3072 --mount-string /tmp/TestMountStartserial378191071/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (6.870344168s)
--- PASS: TestMountStart/serial/StartWithMountFirst (7.87s)

TestMountStart/serial/VerifyMountFirst (0.27s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-188501 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.27s)
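The `ssh -- ls /minikube-host` check only proves the mount point is visible inside the guest. A fuller round-trip sketch using the host path from the --mount-string above (the file name is illustrative):

    # Drop a file on the host side of the mount, then read it from the guest.
    touch /tmp/TestMountStartserial378191071/001/hello-from-host
    out/minikube-linux-amd64 -p mount-start-1-188501 ssh -- ls /minikube-host
    # hello-from-host should appear in the listing.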

TestMountStart/serial/StartWithMountSecond (7.79s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-207606 --memory=3072 --mount-string /tmp/TestMountStartserial378191071/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-207606 --memory=3072 --mount-string /tmp/TestMountStartserial378191071/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (6.785406465s)
--- PASS: TestMountStart/serial/StartWithMountSecond (7.79s)

TestMountStart/serial/VerifyMountSecond (0.28s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-207606 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.28s)

TestMountStart/serial/DeleteFirst (1.69s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-188501 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p mount-start-1-188501 --alsologtostderr -v=5: (1.690633242s)
--- PASS: TestMountStart/serial/DeleteFirst (1.69s)

TestMountStart/serial/VerifyMountPostDelete (0.27s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-207606 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.27s)

TestMountStart/serial/Stop (1.25s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-207606
mount_start_test.go:196: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-207606: (1.25074953s)
--- PASS: TestMountStart/serial/Stop (1.25s)

TestMountStart/serial/RestartStopped (8.3s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-207606
mount_start_test.go:207: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-207606: (7.29643504s)
--- PASS: TestMountStart/serial/RestartStopped (8.30s)

TestMountStart/serial/VerifyMountPostStop (0.27s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-207606 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.27s)

TestMultiNode/serial/FreshStart2Nodes (68.25s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-020762 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
E1123 08:51:54.522002  107234 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/addons-450053/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-020762 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (1m7.733510319s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-020762 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (68.25s)

TestMultiNode/serial/DeployApp2Nodes (4.03s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-020762 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-020762 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-020762 -- rollout status deployment/busybox: (2.631175254s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-020762 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-020762 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-020762 -- exec busybox-7b57f96db7-bn8cb -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-020762 -- exec busybox-7b57f96db7-dg8n4 -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-020762 -- exec busybox-7b57f96db7-bn8cb -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-020762 -- exec busybox-7b57f96db7-dg8n4 -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-020762 -- exec busybox-7b57f96db7-bn8cb -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-020762 -- exec busybox-7b57f96db7-dg8n4 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (4.03s)

TestMultiNode/serial/PingHostFrom2Pods (0.73s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-020762 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-020762 -- exec busybox-7b57f96db7-bn8cb -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-020762 -- exec busybox-7b57f96db7-bn8cb -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-020762 -- exec busybox-7b57f96db7-dg8n4 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-020762 -- exec busybox-7b57f96db7-dg8n4 -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.73s)
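The pipeline above first extracts the host gateway address from nslookup output and then pings it. Broken down into two steps (same commands as the test; the line and field positions assume busybox's nslookup output format):

    # Resolve host.minikube.internal inside the pod, keep line 5 of the
    # output (the address line in busybox nslookup), take its 3rd
    # space-separated field, then ping the resulting IP (192.168.67.1 here).
    IP=$(nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3)
    ping -c 1 "$IP"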

TestMultiNode/serial/AddNode (23.24s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-020762 -v=5 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-020762 -v=5 --alsologtostderr: (22.574136467s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-020762 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (23.24s)

TestMultiNode/serial/MultiNodeLabels (0.06s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-020762 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)
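The jsonpath expression above dumps every node's label map into one bracketed list. A slightly reworked variant that prints one node per line, name first (a sketch, not the test's exact query):

    # One line per node: node name, then its full label map.
    kubectl --context multinode-020762 get nodes \
      -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.metadata.labels}{"\n"}{end}'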

TestMultiNode/serial/ProfileList (0.69s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.69s)

TestMultiNode/serial/CopyFile (10.05s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-020762 status --output json --alsologtostderr
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-020762 cp testdata/cp-test.txt multinode-020762:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-020762 ssh -n multinode-020762 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-020762 cp multinode-020762:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3603533669/001/cp-test_multinode-020762.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-020762 ssh -n multinode-020762 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-020762 cp multinode-020762:/home/docker/cp-test.txt multinode-020762-m02:/home/docker/cp-test_multinode-020762_multinode-020762-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-020762 ssh -n multinode-020762 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-020762 ssh -n multinode-020762-m02 "sudo cat /home/docker/cp-test_multinode-020762_multinode-020762-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-020762 cp multinode-020762:/home/docker/cp-test.txt multinode-020762-m03:/home/docker/cp-test_multinode-020762_multinode-020762-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-020762 ssh -n multinode-020762 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-020762 ssh -n multinode-020762-m03 "sudo cat /home/docker/cp-test_multinode-020762_multinode-020762-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-020762 cp testdata/cp-test.txt multinode-020762-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-020762 ssh -n multinode-020762-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-020762 cp multinode-020762-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3603533669/001/cp-test_multinode-020762-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-020762 ssh -n multinode-020762-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-020762 cp multinode-020762-m02:/home/docker/cp-test.txt multinode-020762:/home/docker/cp-test_multinode-020762-m02_multinode-020762.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-020762 ssh -n multinode-020762-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-020762 ssh -n multinode-020762 "sudo cat /home/docker/cp-test_multinode-020762-m02_multinode-020762.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-020762 cp multinode-020762-m02:/home/docker/cp-test.txt multinode-020762-m03:/home/docker/cp-test_multinode-020762-m02_multinode-020762-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-020762 ssh -n multinode-020762-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-020762 ssh -n multinode-020762-m03 "sudo cat /home/docker/cp-test_multinode-020762-m02_multinode-020762-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-020762 cp testdata/cp-test.txt multinode-020762-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-020762 ssh -n multinode-020762-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-020762 cp multinode-020762-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3603533669/001/cp-test_multinode-020762-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-020762 ssh -n multinode-020762-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-020762 cp multinode-020762-m03:/home/docker/cp-test.txt multinode-020762:/home/docker/cp-test_multinode-020762-m03_multinode-020762.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-020762 ssh -n multinode-020762-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-020762 ssh -n multinode-020762 "sudo cat /home/docker/cp-test_multinode-020762-m03_multinode-020762.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-020762 cp multinode-020762-m03:/home/docker/cp-test.txt multinode-020762-m02:/home/docker/cp-test_multinode-020762-m03_multinode-020762-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-020762 ssh -n multinode-020762-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-020762 ssh -n multinode-020762-m02 "sudo cat /home/docker/cp-test_multinode-020762-m03_multinode-020762-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (10.05s)
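
The CopyFile sequence above drives `minikube cp` in every direction (host to node, node to host, node to node) and verifies each copy by ssh-ing into the target node and cat-ing the file back. Below is a minimal Go sketch of one such round trip, assuming `minikube` is on PATH and reusing the profile name from the log purely for illustration:

package main

import (
	"fmt"
	"log"
	"os/exec"
)

// run mirrors the test helper's pattern above: execute a command and
// fail loudly with its combined output if it exits non-zero.
func run(name string, args ...string) string {
	out, err := exec.Command(name, args...).CombinedOutput()
	if err != nil {
		log.Fatalf("%s %v: %v\n%s", name, args, err, out)
	}
	return string(out)
}

func main() {
	profile := "multinode-020762" // assumption: an existing multi-node profile

	// host -> node copy, then verify over SSH, exactly as the test does
	run("minikube", "-p", profile, "cp", "testdata/cp-test.txt",
		profile+"-m02:/home/docker/cp-test.txt")
	fmt.Print(run("minikube", "-p", profile, "ssh", "-n", profile+"-m02",
		"sudo cat /home/docker/cp-test.txt"))
}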

TestMultiNode/serial/StopNode (2.32s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-020762 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-020762 node stop m03: (1.281853009s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-020762 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-020762 status: exit status 7 (519.703613ms)

-- stdout --
	multinode-020762
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-020762-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-020762-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-020762 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-020762 status --alsologtostderr: exit status 7 (520.330942ms)

-- stdout --
	multinode-020762
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-020762-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-020762-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1123 08:52:43.971415  245092 out.go:360] Setting OutFile to fd 1 ...
	I1123 08:52:43.971560  245092 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:52:43.971575  245092 out.go:374] Setting ErrFile to fd 2...
	I1123 08:52:43.971613  245092 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:52:43.971844  245092 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21969-103686/.minikube/bin
	I1123 08:52:43.972068  245092 out.go:368] Setting JSON to false
	I1123 08:52:43.972107  245092 mustload.go:66] Loading cluster: multinode-020762
	I1123 08:52:43.972239  245092 notify.go:221] Checking for updates...
	I1123 08:52:43.972590  245092 config.go:182] Loaded profile config "multinode-020762": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 08:52:43.972617  245092 status.go:174] checking status of multinode-020762 ...
	I1123 08:52:43.973165  245092 cli_runner.go:164] Run: docker container inspect multinode-020762 --format={{.State.Status}}
	I1123 08:52:43.992258  245092 status.go:371] multinode-020762 host status = "Running" (err=<nil>)
	I1123 08:52:43.992290  245092 host.go:66] Checking if "multinode-020762" exists ...
	I1123 08:52:43.992564  245092 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-020762
	I1123 08:52:44.010422  245092 host.go:66] Checking if "multinode-020762" exists ...
	I1123 08:52:44.010666  245092 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1123 08:52:44.010705  245092 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-020762
	I1123 08:52:44.028465  245092 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32903 SSHKeyPath:/home/jenkins/minikube-integration/21969-103686/.minikube/machines/multinode-020762/id_rsa Username:docker}
	I1123 08:52:44.127439  245092 ssh_runner.go:195] Run: systemctl --version
	I1123 08:52:44.134238  245092 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 08:52:44.147348  245092 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 08:52:44.211183  245092 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:52 OomKillDisable:false NGoroutines:66 SystemTime:2025-11-23 08:52:44.199843932 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1123 08:52:44.212148  245092 kubeconfig.go:125] found "multinode-020762" server: "https://192.168.67.2:8443"
	I1123 08:52:44.212183  245092 api_server.go:166] Checking apiserver status ...
	I1123 08:52:44.212224  245092 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1123 08:52:44.224113  245092 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1224/cgroup
	W1123 08:52:44.233021  245092 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1224/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1123 08:52:44.233075  245092 ssh_runner.go:195] Run: ls
	I1123 08:52:44.237138  245092 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1123 08:52:44.241355  245092 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I1123 08:52:44.241379  245092 status.go:463] multinode-020762 apiserver status = Running (err=<nil>)
	I1123 08:52:44.241391  245092 status.go:176] multinode-020762 status: &{Name:multinode-020762 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1123 08:52:44.241411  245092 status.go:174] checking status of multinode-020762-m02 ...
	I1123 08:52:44.241643  245092 cli_runner.go:164] Run: docker container inspect multinode-020762-m02 --format={{.State.Status}}
	I1123 08:52:44.260680  245092 status.go:371] multinode-020762-m02 host status = "Running" (err=<nil>)
	I1123 08:52:44.260703  245092 host.go:66] Checking if "multinode-020762-m02" exists ...
	I1123 08:52:44.260961  245092 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-020762-m02
	I1123 08:52:44.279106  245092 host.go:66] Checking if "multinode-020762-m02" exists ...
	I1123 08:52:44.279469  245092 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1123 08:52:44.279519  245092 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-020762-m02
	I1123 08:52:44.297821  245092 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32908 SSHKeyPath:/home/jenkins/minikube-integration/21969-103686/.minikube/machines/multinode-020762-m02/id_rsa Username:docker}
	I1123 08:52:44.397607  245092 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1123 08:52:44.410574  245092 status.go:176] multinode-020762-m02 status: &{Name:multinode-020762-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1123 08:52:44.410634  245092 status.go:174] checking status of multinode-020762-m03 ...
	I1123 08:52:44.410945  245092 cli_runner.go:164] Run: docker container inspect multinode-020762-m03 --format={{.State.Status}}
	I1123 08:52:44.429480  245092 status.go:371] multinode-020762-m03 host status = "Stopped" (err=<nil>)
	I1123 08:52:44.429512  245092 status.go:384] host is not running, skipping remaining checks
	I1123 08:52:44.429521  245092 status.go:176] multinode-020762-m03 status: &{Name:multinode-020762-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.32s)
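
Note the exit codes above: `minikube status` deliberately exits 7 when any node's host is stopped, so the test asserts on the non-zero exit rather than treating it as a failure. A small sketch of that check, under the same assumptions as the previous snippet:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("minikube", "-p", "multinode-020762", "status")
	out, err := cmd.CombinedOutput()
	fmt.Print(string(out))

	var ee *exec.ExitError
	if errors.As(err, &ee) && ee.ExitCode() == 7 {
		// Exit 7 means at least one node is stopped; the command itself worked.
		fmt.Println("a node is stopped, as expected after `node stop m03`")
	} else if err != nil {
		fmt.Println("unexpected error:", err)
	}
}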

TestMultiNode/serial/StartAfterStop (7.28s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-020762 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-020762 node start m03 -v=5 --alsologtostderr: (6.545223842s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-020762 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (7.28s)

TestMultiNode/serial/RestartKeepsNodes (82.46s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-020762
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-020762
E1123 08:53:10.889125  107234 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/functional-709702/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-020762: (31.411467512s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-020762 --wait=true -v=5 --alsologtostderr
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-020762 --wait=true -v=5 --alsologtostderr: (50.924703446s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-020762
--- PASS: TestMultiNode/serial/RestartKeepsNodes (82.46s)
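
The point of RestartKeepsNodes is that a full `stop` followed by `start --wait=true` must restore every node recorded in the profile, so comparing `node list` before and after is essentially the whole assertion. A hedged sketch of that comparison, profile name again reused from the log:

package main

import (
	"fmt"
	"log"
	"os/exec"
)

func nodeList(profile string) string {
	out, err := exec.Command("minikube", "node", "list", "-p", profile).Output()
	if err != nil {
		log.Fatalf("node list: %v", err)
	}
	return string(out)
}

func main() {
	p := "multinode-020762" // assumption: an existing three-node profile
	before := nodeList(p)
	for _, args := range [][]string{
		{"stop", "-p", p},
		{"start", "-p", p, "--wait=true"},
	} {
		if out, err := exec.Command("minikube", args...).CombinedOutput(); err != nil {
			log.Fatalf("minikube %v: %v\n%s", args, err, out)
		}
	}
	if after := nodeList(p); after != before {
		log.Fatalf("node list changed across restart:\n%s\nvs\n%s", before, after)
	}
	fmt.Println("all nodes restored")
}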

TestMultiNode/serial/DeleteNode (5.29s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-020762 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-020762 node delete m03: (4.658703248s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-020762 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.29s)
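
The go-template passed to kubectl above prints one line per node containing the status of its Ready condition; after `node delete m03` the test expects exactly two such nodes. The same check shelled out from Go, assuming kubectl's current context points at the cluster:

package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	// Same template the test uses: emits " True" (or " False") per node.
	tmpl := `{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}`
	out, err := exec.Command("kubectl", "get", "nodes", "-o", "go-template="+tmpl).Output()
	if err != nil {
		log.Fatal(err)
	}
	ready := strings.Count(string(out), "True")
	fmt.Printf("%d Ready node(s)\n", ready) // expect 2 once m03 is deleted
}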

TestMultiNode/serial/StopMultiNode (30.46s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-020762 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-020762 stop: (30.258179859s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-020762 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-020762 status: exit status 7 (101.440323ms)

-- stdout --
	multinode-020762
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-020762-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-020762 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-020762 status --alsologtostderr: exit status 7 (99.947081ms)

-- stdout --
	multinode-020762
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-020762-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1123 08:54:49.883826  254878 out.go:360] Setting OutFile to fd 1 ...
	I1123 08:54:49.884097  254878 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:54:49.884106  254878 out.go:374] Setting ErrFile to fd 2...
	I1123 08:54:49.884110  254878 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:54:49.884319  254878 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21969-103686/.minikube/bin
	I1123 08:54:49.884490  254878 out.go:368] Setting JSON to false
	I1123 08:54:49.884520  254878 mustload.go:66] Loading cluster: multinode-020762
	I1123 08:54:49.884659  254878 notify.go:221] Checking for updates...
	I1123 08:54:49.884837  254878 config.go:182] Loaded profile config "multinode-020762": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 08:54:49.884853  254878 status.go:174] checking status of multinode-020762 ...
	I1123 08:54:49.885346  254878 cli_runner.go:164] Run: docker container inspect multinode-020762 --format={{.State.Status}}
	I1123 08:54:49.903363  254878 status.go:371] multinode-020762 host status = "Stopped" (err=<nil>)
	I1123 08:54:49.903384  254878 status.go:384] host is not running, skipping remaining checks
	I1123 08:54:49.903391  254878 status.go:176] multinode-020762 status: &{Name:multinode-020762 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1123 08:54:49.903428  254878 status.go:174] checking status of multinode-020762-m02 ...
	I1123 08:54:49.903667  254878 cli_runner.go:164] Run: docker container inspect multinode-020762-m02 --format={{.State.Status}}
	I1123 08:54:49.922795  254878 status.go:371] multinode-020762-m02 host status = "Stopped" (err=<nil>)
	I1123 08:54:49.922817  254878 status.go:384] host is not running, skipping remaining checks
	I1123 08:54:49.922824  254878 status.go:176] multinode-020762-m02 status: &{Name:multinode-020762-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (30.46s)

TestMultiNode/serial/RestartMultiNode (44.46s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-020762 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-020762 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (43.826351239s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-020762 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (44.46s)

TestMultiNode/serial/ValidateNameConflict (26.1s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-020762
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-020762-m02 --driver=docker  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-020762-m02 --driver=docker  --container-runtime=crio: exit status 14 (76.377577ms)

-- stdout --
	* [multinode-020762-m02] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21969
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21969-103686/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21969-103686/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-020762-m02' is duplicated with machine name 'multinode-020762-m02' in profile 'multinode-020762'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-020762-m03 --driver=docker  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-020762-m03 --driver=docker  --container-runtime=crio: (23.30213422s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-020762
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-020762: exit status 80 (301.369412ms)

-- stdout --
	* Adding node m03 to cluster multinode-020762 as [worker]
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-020762-m03 already exists in multinode-020762-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-020762-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-020762-m03: (2.362668731s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (26.10s)
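
Two distinct guards fire above: exit 14 (MK_USAGE) because the cluster multinode-020762 already owns a machine literally named multinode-020762-m02, and exit 80 (GUEST_NODE_ADD) because `node add` would mint the name multinode-020762-m03, which the standalone profile had just claimed. A sketch that provokes the first guard and inspects the exit code:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

// Creating a profile whose name collides with an existing cluster's machine
// name should fail fast, before any container is created.
func main() {
	err := exec.Command("minikube", "start", "-p", "multinode-020762-m02",
		"--driver=docker", "--container-runtime=crio").Run()
	var ee *exec.ExitError
	if errors.As(err, &ee) {
		fmt.Println("exit code:", ee.ExitCode()) // 14 (MK_USAGE) while the cluster exists
	}
}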

TestPreload (114.75s)

=== RUN   TestPreload
preload_test.go:43: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-940070 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.0
preload_test.go:43: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-940070 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.0: (48.976512427s)
preload_test.go:51: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-940070 image pull gcr.io/k8s-minikube/busybox
E1123 08:56:54.520796  107234 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/addons-450053/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:51: (dbg) Done: out/minikube-linux-amd64 -p test-preload-940070 image pull gcr.io/k8s-minikube/busybox: (2.203671621s)
preload_test.go:57: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-940070
preload_test.go:57: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-940070: (5.903968415s)
preload_test.go:65: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-940070 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio
preload_test.go:65: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-940070 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio: (55.019957508s)
preload_test.go:70: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-940070 image list
helpers_test.go:175: Cleaning up "test-preload-940070" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-940070
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-940070: (2.413220486s)
--- PASS: TestPreload (114.75s)
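
The preload check above boils down to: create a cluster with --preload=false on an older Kubernetes, pull an extra image into the runtime, stop, restart (this time with the preloaded tarball in play), and confirm the pulled image survived. A compressed sketch of that sequence, with names reused from the log:

package main

import (
	"log"
	"os/exec"
	"strings"
)

func run(args ...string) string {
	out, err := exec.Command("minikube", args...).CombinedOutput()
	if err != nil {
		log.Fatalf("minikube %v: %v\n%s", args, err, out)
	}
	return string(out)
}

func main() {
	p := "test-preload-940070" // assumption: a fresh profile name
	run("start", "-p", p, "--preload=false", "--container-runtime=crio",
		"--kubernetes-version=v1.32.0", "--driver=docker")
	run("-p", p, "image", "pull", "gcr.io/k8s-minikube/busybox")
	run("stop", "-p", p)
	run("start", "-p", p, "--container-runtime=crio", "--driver=docker")
	if !strings.Contains(run("-p", p, "image", "list"), "busybox") {
		log.Fatal("busybox did not survive the restart")
	}
}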

TestScheduledStopUnix (98.63s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-035380 --memory=3072 --driver=docker  --container-runtime=crio
E1123 08:58:10.887229  107234 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/functional-709702/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-035380 --memory=3072 --driver=docker  --container-runtime=crio: (22.058781572s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-035380 --schedule 5m -v=5 --alsologtostderr
minikube stop output:

** stderr ** 
	I1123 08:58:21.626386  271956 out.go:360] Setting OutFile to fd 1 ...
	I1123 08:58:21.626516  271956 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:58:21.626527  271956 out.go:374] Setting ErrFile to fd 2...
	I1123 08:58:21.626534  271956 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:58:21.626736  271956 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21969-103686/.minikube/bin
	I1123 08:58:21.626995  271956 out.go:368] Setting JSON to false
	I1123 08:58:21.627089  271956 mustload.go:66] Loading cluster: scheduled-stop-035380
	I1123 08:58:21.627380  271956 config.go:182] Loaded profile config "scheduled-stop-035380": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 08:58:21.627437  271956 profile.go:143] Saving config to /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/scheduled-stop-035380/config.json ...
	I1123 08:58:21.627615  271956 mustload.go:66] Loading cluster: scheduled-stop-035380
	I1123 08:58:21.627713  271956 config.go:182] Loaded profile config "scheduled-stop-035380": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1

** /stderr **
scheduled_stop_test.go:204: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-035380 -n scheduled-stop-035380
scheduled_stop_test.go:172: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-035380 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

** stderr ** 
	I1123 08:58:22.037311  272105 out.go:360] Setting OutFile to fd 1 ...
	I1123 08:58:22.037595  272105 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:58:22.037607  272105 out.go:374] Setting ErrFile to fd 2...
	I1123 08:58:22.037613  272105 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:58:22.037814  272105 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21969-103686/.minikube/bin
	I1123 08:58:22.038125  272105 out.go:368] Setting JSON to false
	I1123 08:58:22.038372  272105 daemonize_unix.go:73] killing process 271990 as it is an old scheduled stop
	I1123 08:58:22.038487  272105 mustload.go:66] Loading cluster: scheduled-stop-035380
	I1123 08:58:22.038927  272105 config.go:182] Loaded profile config "scheduled-stop-035380": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 08:58:22.039030  272105 profile.go:143] Saving config to /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/scheduled-stop-035380/config.json ...
	I1123 08:58:22.039249  272105 mustload.go:66] Loading cluster: scheduled-stop-035380
	I1123 08:58:22.039380  272105 config.go:182] Loaded profile config "scheduled-stop-035380": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1

** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
I1123 08:58:22.044212  107234 retry.go:31] will retry after 73.682µs: open /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/scheduled-stop-035380/pid: no such file or directory
I1123 08:58:22.045389  107234 retry.go:31] will retry after 75.574µs: open /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/scheduled-stop-035380/pid: no such file or directory
I1123 08:58:22.046535  107234 retry.go:31] will retry after 153.575µs: open /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/scheduled-stop-035380/pid: no such file or directory
I1123 08:58:22.047661  107234 retry.go:31] will retry after 272.28µs: open /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/scheduled-stop-035380/pid: no such file or directory
I1123 08:58:22.048807  107234 retry.go:31] will retry after 428.376µs: open /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/scheduled-stop-035380/pid: no such file or directory
I1123 08:58:22.049931  107234 retry.go:31] will retry after 708.252µs: open /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/scheduled-stop-035380/pid: no such file or directory
I1123 08:58:22.051031  107234 retry.go:31] will retry after 1.554441ms: open /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/scheduled-stop-035380/pid: no such file or directory
I1123 08:58:22.053239  107234 retry.go:31] will retry after 999.954µs: open /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/scheduled-stop-035380/pid: no such file or directory
I1123 08:58:22.054375  107234 retry.go:31] will retry after 3.333565ms: open /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/scheduled-stop-035380/pid: no such file or directory
I1123 08:58:22.058569  107234 retry.go:31] will retry after 2.782762ms: open /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/scheduled-stop-035380/pid: no such file or directory
I1123 08:58:22.061816  107234 retry.go:31] will retry after 6.416053ms: open /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/scheduled-stop-035380/pid: no such file or directory
I1123 08:58:22.069049  107234 retry.go:31] will retry after 11.629359ms: open /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/scheduled-stop-035380/pid: no such file or directory
I1123 08:58:22.081278  107234 retry.go:31] will retry after 6.607715ms: open /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/scheduled-stop-035380/pid: no such file or directory
I1123 08:58:22.089163  107234 retry.go:31] will retry after 11.579181ms: open /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/scheduled-stop-035380/pid: no such file or directory
I1123 08:58:22.101426  107234 retry.go:31] will retry after 31.931423ms: open /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/scheduled-stop-035380/pid: no such file or directory
I1123 08:58:22.133925  107234 retry.go:31] will retry after 46.399181ms: open /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/scheduled-stop-035380/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-035380 --cancel-scheduled
minikube stop output:

-- stdout --
	* All existing scheduled stops cancelled

-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-035380 -n scheduled-stop-035380
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-035380
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-035380 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

** stderr ** 
	I1123 08:58:47.962979  272747 out.go:360] Setting OutFile to fd 1 ...
	I1123 08:58:47.963124  272747 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:58:47.963135  272747 out.go:374] Setting ErrFile to fd 2...
	I1123 08:58:47.963141  272747 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:58:47.963345  272747 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21969-103686/.minikube/bin
	I1123 08:58:47.963594  272747 out.go:368] Setting JSON to false
	I1123 08:58:47.963700  272747 mustload.go:66] Loading cluster: scheduled-stop-035380
	I1123 08:58:47.964317  272747 config.go:182] Loaded profile config "scheduled-stop-035380": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 08:58:47.964428  272747 profile.go:143] Saving config to /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/scheduled-stop-035380/config.json ...
	I1123 08:58:47.964666  272747 mustload.go:66] Loading cluster: scheduled-stop-035380
	I1123 08:58:47.964820  272747 config.go:182] Loaded profile config "scheduled-stop-035380": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1

** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-035380
scheduled_stop_test.go:218: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-035380: exit status 7 (78.408174ms)

-- stdout --
	scheduled-stop-035380
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-035380 -n scheduled-stop-035380
scheduled_stop_test.go:189: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-035380 -n scheduled-stop-035380: exit status 7 (81.023514ms)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:189: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-035380" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-035380
E1123 08:59:33.949879  107234 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/functional-709702/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p scheduled-stop-035380: (5.007169276s)
--- PASS: TestScheduledStopUnix (98.63s)
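
The pid-file retries above come from the test polling for the scheduler's pid under the profile directory: `stop --schedule` forks a background process and returns immediately, scheduling again replaces the old one ("killing process ... as it is an old scheduled stop"), and `--cancel-scheduled` kills it without stopping the cluster. A minimal schedule-then-cancel sketch, assuming an existing profile:

package main

import (
	"fmt"
	"log"
	"os/exec"
)

func main() {
	p := "scheduled-stop-035380" // assumption: an existing profile
	if out, err := exec.Command("minikube", "stop", "-p", p,
		"--schedule", "5m").CombinedOutput(); err != nil {
		log.Fatalf("schedule: %v\n%s", err, out)
	}
	out, err := exec.Command("minikube", "stop", "-p", p,
		"--cancel-scheduled").CombinedOutput()
	if err != nil {
		log.Fatalf("cancel: %v\n%s", err, out)
	}
	fmt.Print(string(out)) // "* All existing scheduled stops cancelled"
}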

TestInsufficientStorage (12.47s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-amd64 start -p insufficient-storage-169731 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p insufficient-storage-169731 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio: exit status 26 (9.963867123s)

-- stdout --
	{"specversion":"1.0","id":"cb5db550-e497-45c1-8e22-ec31df57edea","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-169731] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"76b6f48d-768b-4a7e-b917-f2c9ceceafbc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21969"}}
	{"specversion":"1.0","id":"8f2f016e-b78a-476c-995b-e20d2749dcda","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"7540f460-ff23-44c7-8945-af102df4e46b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21969-103686/kubeconfig"}}
	{"specversion":"1.0","id":"bf9606fa-4624-4038-9d3c-cf5fd0b87c1a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21969-103686/.minikube"}}
	{"specversion":"1.0","id":"7bc999d1-a480-474c-bcd0-e3cc30adeea5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"69550b5f-ea49-4054-b0f6-d4a997681006","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"890dd0a7-c899-4ddc-9cb6-cd9707726e2e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"5fd0b634-7c24-49cb-9c26-24c1186170f3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"81e64e1d-65e2-4667-beaf-f0ed8e71943b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"51959e66-fa36-4fc4-a493-1d81e912ce83","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"44b57b05-6c65-47e9-bc90-071c04756527","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-169731\" primary control-plane node in \"insufficient-storage-169731\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"a51f95ed-9320-4fa1-a53a-5e02d22e6222","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.48-1763789673-21948 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"1f5bf794-6490-4497-89d5-8bb83e465d2c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=3072MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"adf3ae5f-9a21-4bb4-989b-2386a41e8dba","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-169731 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-169731 --output=json --layout=cluster: exit status 7 (302.355889ms)

-- stdout --
	{"Name":"insufficient-storage-169731","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=3072MB) ...","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-169731","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E1123 08:59:48.391962  275284 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-169731" does not appear in /home/jenkins/minikube-integration/21969-103686/kubeconfig

** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-169731 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-169731 --output=json --layout=cluster: exit status 7 (300.763167ms)

-- stdout --
	{"Name":"insufficient-storage-169731","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-169731","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E1123 08:59:48.693739  275395 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-169731" does not appear in /home/jenkins/minikube-integration/21969-103686/kubeconfig
	E1123 08:59:48.703909  275395 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/insufficient-storage-169731/events.json: no such file or directory

** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-169731" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p insufficient-storage-169731
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p insufficient-storage-169731: (1.901788302s)
--- PASS: TestInsufficientStorage (12.47s)
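
With --output=json, start emits CloudEvents-style records and the storage check aborts with exit 26 (RSRC_DOCKER_STORAGE); `status --output=json --layout=cluster` then reports StatusCode 507 (InsufficientStorage, echoing HTTP 507). A sketch decoding just enough of that status document to detect the condition:

package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

// Minimal subset of the --layout=cluster document shown above.
type clusterStatus struct {
	Name       string `json:"Name"`
	StatusCode int    `json:"StatusCode"`
	StatusName string `json:"StatusName"`
}

func main() {
	// status exits 7 here, but the JSON is still written to stdout.
	out, _ := exec.Command("minikube", "status", "-p", "insufficient-storage-169731",
		"--output=json", "--layout=cluster").Output()
	var st clusterStatus
	if err := json.Unmarshal(out, &st); err != nil {
		log.Fatal(err)
	}
	if st.StatusCode == 507 {
		fmt.Printf("%s: %s (/var is out of space)\n", st.Name, st.StatusName)
	}
}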

TestRunningBinaryUpgrade (58.92s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.32.0.2693829556 start -p running-upgrade-760153 --memory=3072 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.32.0.2693829556 start -p running-upgrade-760153 --memory=3072 --vm-driver=docker  --container-runtime=crio: (29.956274922s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-760153 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-760153 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (23.782032518s)
helpers_test.go:175: Cleaning up "running-upgrade-760153" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-760153
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-760153: (2.476119296s)
--- PASS: TestRunningBinaryUpgrade (58.92s)

TestKubernetesUpgrade (309.82s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-064370 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-064370 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (25.532183969s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-064370
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-064370: (5.748132553s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-064370 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-064370 status --format={{.Host}}: exit status 7 (92.983771ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-064370 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-064370 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m28.662162153s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-064370 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-064370 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-064370 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio: exit status 106 (108.343232ms)

-- stdout --
	* [kubernetes-upgrade-064370] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21969
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21969-103686/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21969-103686/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.34.1 cluster to v1.28.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.28.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-064370
	    minikube start -p kubernetes-upgrade-064370 --kubernetes-version=v1.28.0
	    
	    2) Create a second cluster with Kubernetes 1.28.0, by running:
	    
	    minikube start -p kubernetes-upgrade-0643702 --kubernetes-version=v1.28.0
	    
	    3) Use the existing cluster at version Kubernetes 1.34.1, by running:
	    
	    minikube start -p kubernetes-upgrade-064370 --kubernetes-version=v1.34.1
	    

** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-064370 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-064370 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (7.090435779s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-064370" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-064370
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-064370: (2.513731719s)
--- PASS: TestKubernetesUpgrade (309.82s)
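
The downgrade attempt above is rejected up front: minikube exits 106 (K8S_DOWNGRADE_UNSUPPORTED) with the recovery suggestions shown, without touching the running cluster. A sketch asserting that guard:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	// assumption: a profile currently running Kubernetes v1.34.1
	err := exec.Command("minikube", "start", "-p", "kubernetes-upgrade-064370",
		"--kubernetes-version=v1.28.0", "--driver=docker",
		"--container-runtime=crio").Run()
	var ee *exec.ExitError
	if errors.As(err, &ee) && ee.ExitCode() == 106 {
		fmt.Println("downgrade refused; delete the profile or stay on v1.34.1")
	}
}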

TestMissingContainerUpgrade (125.1s)

=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.32.0.935433739 start -p missing-upgrade-265184 --memory=3072 --driver=docker  --container-runtime=crio
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.32.0.935433739 start -p missing-upgrade-265184 --memory=3072 --driver=docker  --container-runtime=crio: (1m13.017415636s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-265184
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-265184: (1.677214965s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-265184
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-amd64 start -p missing-upgrade-265184 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-amd64 start -p missing-upgrade-265184 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (44.250695657s)
helpers_test.go:175: Cleaning up "missing-upgrade-265184" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p missing-upgrade-265184
E1123 09:01:54.520250  107234 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/addons-450053/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p missing-upgrade-265184: (2.635026089s)
--- PASS: TestMissingContainerUpgrade (125.10s)
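
Here the test removes the cluster's container behind minikube's back (docker stop, then docker rm) and verifies that a newer binary can rebuild it from the saved profile alone. The same three steps as a sketch, assuming the docker driver, where the container name equals the profile name:

package main

import (
	"log"
	"os/exec"
)

func main() {
	name := "missing-upgrade-265184" // assumption: an existing docker-driver profile
	for _, args := range [][]string{
		{"docker", "stop", name},
		{"docker", "rm", name},
		{"minikube", "start", "-p", name, "--driver=docker", "--container-runtime=crio"},
	} {
		if out, err := exec.Command(args[0], args[1:]...).CombinedOutput(); err != nil {
			log.Fatalf("%v: %v\n%s", args, err, out)
		}
	}
}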

TestStoppedBinaryUpgrade/Setup (3.34s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (3.34s)

TestStoppedBinaryUpgrade/Upgrade (89.25s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.32.0.2134555804 start -p stopped-upgrade-248610 --memory=3072 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.32.0.2134555804 start -p stopped-upgrade-248610 --memory=3072 --vm-driver=docker  --container-runtime=crio: (1m12.691788131s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.32.0.2134555804 -p stopped-upgrade-248610 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.32.0.2134555804 -p stopped-upgrade-248610 stop: (2.448627386s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-248610 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-248610 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (14.105930691s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (89.25s)

TestNetworkPlugins/group/false (4.8s)

=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-741183 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-741183 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio: exit status 14 (679.206815ms)

-- stdout --
	* [false-741183] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21969
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21969-103686/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21969-103686/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

-- /stdout --
** stderr ** 
	I1123 08:59:56.054040  277248 out.go:360] Setting OutFile to fd 1 ...
	I1123 08:59:56.054411  277248 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:59:56.054423  277248 out.go:374] Setting ErrFile to fd 2...
	I1123 08:59:56.054428  277248 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1123 08:59:56.054679  277248 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21969-103686/.minikube/bin
	I1123 08:59:56.055299  277248 out.go:368] Setting JSON to false
	I1123 08:59:56.056346  277248 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-9","uptime":6136,"bootTime":1763882260,"procs":213,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1123 08:59:56.056415  277248 start.go:143] virtualization: kvm guest
	I1123 08:59:56.076982  277248 out.go:179] * [false-741183] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1123 08:59:56.202251  277248 notify.go:221] Checking for updates...
	I1123 08:59:56.202523  277248 out.go:179]   - MINIKUBE_LOCATION=21969
	I1123 08:59:56.225019  277248 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1123 08:59:56.245205  277248 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21969-103686/kubeconfig
	I1123 08:59:56.295137  277248 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21969-103686/.minikube
	I1123 08:59:56.297795  277248 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1123 08:59:56.437907  277248 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1123 08:59:56.562984  277248 config.go:182] Loaded profile config "offline-crio-228886": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1123 08:59:56.563140  277248 driver.go:422] Setting default libvirt URI to qemu:///system
	I1123 08:59:56.586108  277248 docker.go:124] docker version: linux-29.0.2:Docker Engine - Community
	I1123 08:59:56.586204  277248 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1123 08:59:56.659390  277248 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:28 OomKillDisable:false NGoroutines:47 SystemTime:2025-11-23 08:59:56.647066941 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-9 Labels:[] ExperimentalBuild:false ServerVersion:29.0.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1123 08:59:56.659535  277248 docker.go:319] overlay module found
	I1123 08:59:56.661749  277248 out.go:179] * Using the docker driver based on user configuration
	I1123 08:59:56.663007  277248 start.go:309] selected driver: docker
	I1123 08:59:56.663032  277248 start.go:927] validating driver "docker" against <nil>
	I1123 08:59:56.663049  277248 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1123 08:59:56.665027  277248 out.go:203] 
	W1123 08:59:56.666099  277248 out.go:285] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I1123 08:59:56.667257  277248 out.go:203] 

** /stderr **
E1123 08:59:57.587184  107234 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/addons-450053/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:88: 
----------------------- debugLogs start: false-741183 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-741183

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-741183

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-741183

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-741183

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-741183

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-741183

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-741183

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-741183

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-741183

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-741183

>>> host: /etc/nsswitch.conf:
* Profile "false-741183" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-741183"

>>> host: /etc/hosts:
* Profile "false-741183" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-741183"

>>> host: /etc/resolv.conf:
* Profile "false-741183" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-741183"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-741183

>>> host: crictl pods:
* Profile "false-741183" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-741183"

>>> host: crictl containers:
* Profile "false-741183" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-741183"

>>> k8s: describe netcat deployment:
error: context "false-741183" does not exist

>>> k8s: describe netcat pod(s):
error: context "false-741183" does not exist

>>> k8s: netcat logs:
error: context "false-741183" does not exist

>>> k8s: describe coredns deployment:
error: context "false-741183" does not exist

>>> k8s: describe coredns pods:
error: context "false-741183" does not exist

>>> k8s: coredns logs:
error: context "false-741183" does not exist

>>> k8s: describe api server pod(s):
error: context "false-741183" does not exist

>>> k8s: api server logs:
error: context "false-741183" does not exist

>>> host: /etc/cni:
* Profile "false-741183" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-741183"

>>> host: ip a s:
* Profile "false-741183" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-741183"

>>> host: ip r s:
* Profile "false-741183" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-741183"

>>> host: iptables-save:
* Profile "false-741183" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-741183"

>>> host: iptables table nat:
* Profile "false-741183" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-741183"

>>> k8s: describe kube-proxy daemon set:
error: context "false-741183" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "false-741183" does not exist

>>> k8s: kube-proxy logs:
error: context "false-741183" does not exist

>>> host: kubelet daemon status:
* Profile "false-741183" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-741183"

>>> host: kubelet daemon config:
* Profile "false-741183" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-741183"

>>> k8s: kubelet logs:
* Profile "false-741183" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-741183"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-741183" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-741183"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-741183" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-741183"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: false-741183

>>> host: docker daemon status:
* Profile "false-741183" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-741183"

>>> host: docker daemon config:
* Profile "false-741183" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-741183"

>>> host: /etc/docker/daemon.json:
* Profile "false-741183" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-741183"

>>> host: docker system info:
* Profile "false-741183" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-741183"

>>> host: cri-docker daemon status:
* Profile "false-741183" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-741183"

>>> host: cri-docker daemon config:
* Profile "false-741183" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-741183"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-741183" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-741183"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-741183" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-741183"

>>> host: cri-dockerd version:
* Profile "false-741183" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-741183"

>>> host: containerd daemon status:
* Profile "false-741183" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-741183"

>>> host: containerd daemon config:
* Profile "false-741183" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-741183"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-741183" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-741183"

>>> host: /etc/containerd/config.toml:
* Profile "false-741183" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-741183"

>>> host: containerd config dump:
* Profile "false-741183" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-741183"

>>> host: crio daemon status:
* Profile "false-741183" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-741183"

>>> host: crio daemon config:
* Profile "false-741183" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-741183"

>>> host: /etc/crio:
* Profile "false-741183" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-741183"

>>> host: crio config:
* Profile "false-741183" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-741183"

----------------------- debugLogs end: false-741183 [took: 3.924020315s] --------------------------------
helpers_test.go:175: Cleaning up "false-741183" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-741183
--- PASS: TestNetworkPlugins/group/false (4.80s)
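
Note: the MK_USAGE exit above is the behavior this test asserts, not a regression: cri-o ships no fallback pod network, so minikube rejects --cni=false with --container-runtime=crio. A start line that passes validation names a concrete CNI (sketch; bridge is just one valid choice):

	$ minikube start -p false-741183 --memory=3072 --driver=docker --container-runtime=crio --cni=bridge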

TestStoppedBinaryUpgrade/MinikubeLogs (0.96s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-248610
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.96s)

TestPause/serial/Start (45.65s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-397202 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-397202 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (45.652675986s)
--- PASS: TestPause/serial/Start (45.65s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.11s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:108: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-457254 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:108: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-457254 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio: exit status 14 (107.12221ms)

-- stdout --
	* [NoKubernetes-457254] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21969
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21969-103686/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21969-103686/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.11s)
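
Note: the MK_USAGE exit is again the asserted behavior: --no-kubernetes and --kubernetes-version are mutually exclusive. Either clear the version pin as the message suggests, or drop one of the two flags (sketch):

	$ minikube config unset kubernetes-version
	$ minikube start -p NoKubernetes-457254 --no-kubernetes --driver=docker --container-runtime=crio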

TestNoKubernetes/serial/StartWithK8s (20.26s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:120: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-457254 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:120: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-457254 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (19.909829035s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-457254 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (20.26s)

TestNoKubernetes/serial/StartWithStopK8s (16.09s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:137: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-457254 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:137: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-457254 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (13.77414675s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-457254 status -o json
no_kubernetes_test.go:225: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-457254 status -o json: exit status 2 (330.417455ms)

-- stdout --
	{"Name":"NoKubernetes-457254","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
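
Note: minikube status encodes non-running components in its exit code, which is why the command exits 2 even though it printed well-formed JSON; a script can parse the payload instead of the code (sketch, assuming jq is installed):

	$ out/minikube-linux-amd64 -p NoKubernetes-457254 status -o json | jq -r .Kubelet
	Stopped
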
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-457254
no_kubernetes_test.go:149: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-457254: (1.986965363s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (16.09s)

TestPause/serial/SecondStartNoReconfiguration (5.84s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-397202 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-397202 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (5.828278082s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (5.84s)

TestNoKubernetes/serial/Start (7.09s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:161: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-457254 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:161: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-457254 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (7.092653421s)
--- PASS: TestNoKubernetes/serial/Start (7.09s)

TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0s)

=== RUN   TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads
no_kubernetes_test.go:89: Checking cache directory: /home/jenkins/minikube-integration/21969-103686/.minikube/cache/linux/amd64/v0.0.0
--- PASS: TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0.00s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.31s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-457254 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-457254 "sudo systemctl is-active --quiet service kubelet": exit status 1 (307.02072ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.31s)
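
Note: systemctl is-active exits 0 only for an active unit, so the ssh exit status 3 above is the pass condition: kubelet is present but not running. The same probe by hand (sketch; the printed state is typically "inactive"):

	$ out/minikube-linux-amd64 ssh -p NoKubernetes-457254 "sudo systemctl is-active kubelet"
	inactive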

TestNoKubernetes/serial/ProfileList (15.56s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:194: (dbg) Run:  out/minikube-linux-amd64 profile list
E1123 09:03:10.884442  107234 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/functional-709702/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
no_kubernetes_test.go:194: (dbg) Done: out/minikube-linux-amd64 profile list: (14.564344846s)
no_kubernetes_test.go:204: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (15.56s)

TestNoKubernetes/serial/Stop (1.27s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:183: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-457254
no_kubernetes_test.go:183: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-457254: (1.267088902s)
--- PASS: TestNoKubernetes/serial/Stop (1.27s)

TestNoKubernetes/serial/StartNoArgs (7.65s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:216: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-457254 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:216: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-457254 --driver=docker  --container-runtime=crio: (7.650894437s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (7.65s)

TestNetworkPlugins/group/auto/Start (39.36s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-741183 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-741183 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio: (39.358236948s)
--- PASS: TestNetworkPlugins/group/auto/Start (39.36s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.29s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-457254 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-457254 "sudo systemctl is-active --quiet service kubelet": exit status 1 (291.24623ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.29s)

TestNetworkPlugins/group/kindnet/Start (41.57s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-741183 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-741183 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio: (41.571015164s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (41.57s)

TestNetworkPlugins/group/auto/KubeletFlags (0.29s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-741183 "pgrep -a kubelet"
I1123 09:03:58.828916  107234 config.go:182] Loaded profile config "auto-741183": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.29s)

TestNetworkPlugins/group/auto/NetCatPod (8.19s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-741183 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-7wxlg" [8a57b37c-cdbf-4425-b065-4cfac681d7d6] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-7wxlg" [8a57b37c-cdbf-4425-b065-4cfac681d7d6] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 8.004122337s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (8.19s)
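
Note: the helper above polls pods with label app=netcat until they report Ready; outside the test harness the same gate is a one-liner (sketch):

	$ kubectl --context auto-741183 wait --for=condition=Ready pod -l app=netcat --timeout=15m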

TestNetworkPlugins/group/auto/DNS (0.11s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-741183 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.11s)

TestNetworkPlugins/group/auto/Localhost (0.08s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-741183 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.08s)

TestNetworkPlugins/group/auto/HairPin (0.08s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-741183 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.08s)

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:352: "kindnet-h799q" [4582170b-3a1c-48ae-9a36-474b31dcbfa2] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.003879014s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.31s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-741183 "pgrep -a kubelet"
I1123 09:04:16.132268  107234 config.go:182] Loaded profile config "kindnet-741183": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.31s)

TestNetworkPlugins/group/kindnet/NetCatPod (9.21s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-741183 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-6c2bs" [35171958-dd3b-44a6-920d-81ab87c2423d] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-6c2bs" [35171958-dd3b-44a6-920d-81ab87c2423d] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 9.003222313s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (9.21s)

TestNetworkPlugins/group/kindnet/DNS (0.1s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-741183 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.10s)

TestNetworkPlugins/group/kindnet/Localhost (0.08s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-741183 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.08s)

TestNetworkPlugins/group/kindnet/HairPin (0.09s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-741183 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.09s)

TestNetworkPlugins/group/calico/Start (51.55s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-741183 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-741183 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio: (51.552191949s)
--- PASS: TestNetworkPlugins/group/calico/Start (51.55s)

TestNetworkPlugins/group/custom-flannel/Start (49s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-741183 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-741183 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio: (48.998608201s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (49.00s)
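
Note: besides built-in names like calico or flannel, --cni also accepts a path to a CNI manifest, which is what this variant exercises with testdata/kube-flannel.yaml (sketch; the path is relative to the test's working directory):

	$ minikube start -p custom-flannel-741183 --driver=docker --container-runtime=crio --cni=testdata/kube-flannel.yaml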

TestNetworkPlugins/group/enable-default-cni/Start (36.97s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-741183 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-741183 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio: (36.970359506s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (36.97s)

TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:352: "calico-node-5hhlh" [1ca9be46-ca37-4011-9d6b-d70d0f924726] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
helpers_test.go:352: "calico-node-5hhlh" [1ca9be46-ca37-4011-9d6b-d70d0f924726] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.00481617s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

TestNetworkPlugins/group/calico/KubeletFlags (0.3s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-741183 "pgrep -a kubelet"
I1123 09:05:24.588654  107234 config.go:182] Loaded profile config "calico-741183": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.30s)

TestNetworkPlugins/group/calico/NetCatPod (9.19s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-741183 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-nh2tt" [8f57f7a7-730f-4226-95e8-eb6d4ded8443] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-nh2tt" [8f57f7a7-730f-4226-95e8-eb6d4ded8443] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 9.004747196s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (9.19s)

TestNetworkPlugins/group/flannel/Start (49.09s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-741183 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-741183 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio: (49.087652865s)
--- PASS: TestNetworkPlugins/group/flannel/Start (49.09s)

TestNetworkPlugins/group/calico/DNS (0.11s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-741183 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.11s)

TestNetworkPlugins/group/calico/Localhost (0.09s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-741183 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.09s)

TestNetworkPlugins/group/calico/HairPin (0.1s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-741183 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.10s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.33s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-741183 "pgrep -a kubelet"
I1123 09:05:34.678681  107234 config.go:182] Loaded profile config "custom-flannel-741183": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.33s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (11.24s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-741183 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-gfx97" [9b1f6ce6-e58b-4efa-96b6-99ed6a990be2] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-gfx97" [9b1f6ce6-e58b-4efa-96b6-99ed6a990be2] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 11.003948238s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (11.24s)

TestNetworkPlugins/group/custom-flannel/DNS (0.12s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-741183 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.12s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.1s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-741183 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.10s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.09s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-741183 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.09s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.38s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-741183 "pgrep -a kubelet"
I1123 09:05:54.148277  107234 config.go:182] Loaded profile config "enable-default-cni-741183": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.38s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (8.22s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-741183 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-k5rpx" [5e4125cb-e54b-4c21-a25e-7705aaaeb74b] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-k5rpx" [5e4125cb-e54b-4c21-a25e-7705aaaeb74b] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 8.004887233s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (8.22s)

TestNetworkPlugins/group/bridge/Start (41.72s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-741183 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-741183 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio: (41.715445664s)
--- PASS: TestNetworkPlugins/group/bridge/Start (41.72s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.2s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-741183 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.20s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.1s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-741183 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.10s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.11s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-741183 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.11s)

TestStartStop/group/old-k8s-version/serial/FirstStart (52.57s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-054094 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-054094 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0: (52.573086046s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (52.57s)
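
Trimmed to the flags that matter for this group (the full invocation is in the run above), FirstStart pins the older Kubernetes release while holding the driver and runtime fixed:

	# --wait=true blocks until the cluster reports healthy;
	# --kubernetes-version pins the "old" release under test
	out/minikube-linux-amd64 start -p old-k8s-version-054094 --memory=3072 \
	  --wait=true --driver=docker --container-runtime=crio \
	  --kubernetes-version=v1.28.0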

TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:352: "kube-flannel-ds-dfdbm" [5b2d7c55-b038-45c8-b12b-881e90ada4e3] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.005716132s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)
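
ControllerPod only confirms the CNI's own agent is up before the traffic tests run; flannel ships as a DaemonSet in the kube-flannel namespace. A rough kubectl equivalent of the helper's 10m poll (an assumption; the suite polls the API itself):

	kubectl --context flannel-741183 -n kube-flannel wait \
	  --for=condition=ready pod -l app=flannel --timeout=10m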

TestStartStop/group/no-preload/serial/FirstStart (56.38s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-619589 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-619589 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (56.383763232s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (56.38s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.43s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-741183 "pgrep -a kubelet"
I1123 09:06:27.322888  107234 config.go:182] Loaded profile config "flannel-741183": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.43s)

TestNetworkPlugins/group/flannel/NetCatPod (10.26s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-741183 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-s8dz6" [063b411d-5e4e-4e40-bbf0-67a5a909209c] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-s8dz6" [063b411d-5e4e-4e40-bbf0-67a5a909209c] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 10.003751419s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (10.26s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.31s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-741183 "pgrep -a kubelet"
I1123 09:06:37.447662  107234 config.go:182] Loaded profile config "bridge-741183": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.31s)

TestNetworkPlugins/group/bridge/NetCatPod (8.21s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-741183 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-xv4kz" [a94a8e91-4e1c-474b-b754-7f7edcd00759] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-xv4kz" [a94a8e91-4e1c-474b-b754-7f7edcd00759] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 8.004102442s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (8.21s)

TestNetworkPlugins/group/flannel/DNS (0.12s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-741183 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.12s)

TestNetworkPlugins/group/flannel/Localhost (0.1s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-741183 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.10s)

TestNetworkPlugins/group/flannel/HairPin (0.12s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-741183 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.12s)

TestNetworkPlugins/group/bridge/DNS (0.12s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-741183 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.12s)

TestNetworkPlugins/group/bridge/Localhost (0.1s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-741183 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.10s)

TestNetworkPlugins/group/bridge/HairPin (0.1s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-741183 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.10s)
E1123 09:08:59.001219  107234 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/auto-741183/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 09:08:59.007921  107234 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/auto-741183/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 09:08:59.019287  107234 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/auto-741183/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 09:08:59.040763  107234 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/auto-741183/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 09:08:59.083073  107234 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/auto-741183/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 09:08:59.165004  107234 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/auto-741183/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 09:08:59.326599  107234 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/auto-741183/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 09:08:59.648879  107234 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/auto-741183/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"

TestStartStop/group/embed-certs/serial/FirstStart (41.23s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-529341 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-529341 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (41.229820732s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (41.23s)

TestStartStop/group/old-k8s-version/serial/DeployApp (11.27s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-054094 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [45bf2904-a260-4a9c-9bb1-efedb8776977] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [45bf2904-a260-4a9c-9bb1-efedb8776977] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 11.003885325s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-054094 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (11.27s)
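
DeployApp is a two-step smoke test: create the busybox pod, wait for it to run, then exec a trivial command. Any output from the exec proves the runtime's exec path works end to end, which is presumably why a throwaway probe like ulimit is used:

	kubectl --context old-k8s-version-054094 create -f testdata/busybox.yaml
	# once Running, exec a trivial command through the crio exec path
	kubectl --context old-k8s-version-054094 exec busybox -- /bin/sh -c "ulimit -n"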

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (42.77s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-602386 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-602386 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (42.765104434s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (42.77s)

TestStartStop/group/old-k8s-version/serial/Stop (16.14s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-054094 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-054094 --alsologtostderr -v=3: (16.136763638s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (16.14s)

TestStartStop/group/no-preload/serial/DeployApp (8.26s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-619589 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [28bf9ee2-1ef2-48b8-81bb-3529cc01dc8c] Pending
helpers_test.go:352: "busybox" [28bf9ee2-1ef2-48b8-81bb-3529cc01dc8c] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [28bf9ee2-1ef2-48b8-81bb-3529cc01dc8c] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 8.003033457s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-619589 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (8.26s)

TestStartStop/group/no-preload/serial/Stop (16.35s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-619589 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-619589 --alsologtostderr -v=3: (16.35177095s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (16.35s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.23s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-054094 -n old-k8s-version-054094
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-054094 -n old-k8s-version-054094: exit status 7 (88.645052ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-054094 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.23s)
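
EnableAddonAfterStop leans on minikube's status exit codes: with the host stopped, status --format={{.Host}} prints "Stopped" and exits 7, which the test tolerates ("may be ok") before enabling the addon against the stopped profile:

	out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-054094 -n old-k8s-version-054094
	# exit status 7 here means the host is stopped, not that the command broke
	out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-054094 \
	  --images=MetricsScraper=registry.k8s.io/echoserver:1.4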

TestStartStop/group/old-k8s-version/serial/SecondStart (43.02s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-054094 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-054094 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0: (42.661861379s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-054094 -n old-k8s-version-054094
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (43.02s)

TestStartStop/group/embed-certs/serial/DeployApp (10.27s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-529341 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [05390c6f-b2aa-4701-8a3d-9119282e9b94] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [05390c6f-b2aa-4701-8a3d-9119282e9b94] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 10.00413976s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-529341 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (10.27s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-619589 -n no-preload-619589
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-619589 -n no-preload-619589: exit status 7 (83.148677ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-619589 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.20s)

TestStartStop/group/no-preload/serial/SecondStart (50.52s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-619589 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-619589 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (50.091730916s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-619589 -n no-preload-619589
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (50.52s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.26s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-602386 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [4a775da7-4f9d-4680-9fb4-7d598e9e8512] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [4a775da7-4f9d-4680-9fb4-7d598e9e8512] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.004307276s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-602386 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.26s)

TestStartStop/group/embed-certs/serial/Stop (18.59s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-529341 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-529341 --alsologtostderr -v=3: (18.585189268s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (18.59s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (16.39s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-602386 --alsologtostderr -v=3
E1123 09:08:10.884729  107234 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/functional-709702/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-602386 --alsologtostderr -v=3: (16.394822054s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (16.39s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.21s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-529341 -n embed-certs-529341
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-529341 -n embed-certs-529341: exit status 7 (82.743653ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-529341 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.21s)

TestStartStop/group/embed-certs/serial/SecondStart (47.76s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-529341 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-529341 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (47.41315357s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-529341 -n embed-certs-529341
E1123 09:09:00.290870  107234 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/auto-741183/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (47.76s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-smgkc" [9aeb7744-7444-4754-a199-8a503b630d8b] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004161104s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)
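
The UserAppExists/AddonExists checks after each SecondStart reduce to waiting for the dashboard pod by label. A rough kubectl equivalent of the 9m poll (an assumption; the suite uses its own watcher):

	kubectl --context old-k8s-version-054094 -n kubernetes-dashboard wait \
	  --for=condition=ready pod -l k8s-app=kubernetes-dashboard --timeout=9m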

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.23s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-602386 -n default-k8s-diff-port-602386
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-602386 -n default-k8s-diff-port-602386: exit status 7 (92.392866ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-602386 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.23s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (53.63s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-602386 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-602386 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (53.276305874s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-602386 -n default-k8s-diff-port-602386
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (53.63s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (6.1s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-smgkc" [9aeb7744-7444-4754-a199-8a503b630d8b] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004076501s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-054094 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (6.10s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.31s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-054094 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.31s)
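
VerifyKubernetesImages lists the node's images as JSON and flags anything outside minikube's expected set; the "non-minikube" images reported above are just the suite's own workloads (kindnet, busybox). To eyeball the same data by hand (piping to jq is my addition, not part of the test):

	out/minikube-linux-amd64 -p old-k8s-version-054094 image list --format=json | jq .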

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-d5gfp" [712cadaa-769d-4ff2-a7d3-2d9a8a8bf56e] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.006086727s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/newest-cni/serial/FirstStart (26.58s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-531046 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-531046 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (26.583348558s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (26.58s)
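
newest-cni starts with a narrower --wait set and hands kubeadm a pod CIDR directly, which is why the Deploy/UserAppExists/AddonExists steps in this group are deliberate no-ops ("cni mode requires additional setup before pods can schedule"). The CNI-relevant flags from the run above:

	out/minikube-linux-amd64 start -p newest-cni-531046 --memory=3072 \
	  --wait=apiserver,system_pods,default_sa \
	  --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 \
	  --driver=docker --container-runtime=crio --kubernetes-version=v1.34.1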

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.07s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-d5gfp" [712cadaa-769d-4ff2-a7d3-2d9a8a8bf56e] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004338609s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-619589 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.07s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.25s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-619589 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.25s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-rvlmt" [ba819018-1e9f-492a-8282-cbb1801bf72e] Running
E1123 09:09:01.572391  107234 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/auto-741183/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 09:09:04.134100  107234 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/auto-741183/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004317758s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.07s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-rvlmt" [ba819018-1e9f-492a-8282-cbb1801bf72e] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003582097s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-529341 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.07s)

TestStartStop/group/newest-cni/serial/Stop (2.53s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-531046 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-531046 --alsologtostderr -v=3: (2.530438633s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (2.53s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-531046 -n newest-cni-531046
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-531046 -n newest-cni-531046: exit status 7 (82.17953ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-531046 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
E1123 09:09:09.255515  107234 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/auto-741183/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.20s)

TestStartStop/group/newest-cni/serial/SecondStart (10.98s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-531046 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
E1123 09:09:09.824296  107234 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/kindnet-741183/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 09:09:09.830753  107234 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/kindnet-741183/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 09:09:09.842142  107234 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/kindnet-741183/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 09:09:09.863588  107234 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/kindnet-741183/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 09:09:09.905071  107234 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/kindnet-741183/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 09:09:09.986356  107234 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/kindnet-741183/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 09:09:10.147832  107234 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/kindnet-741183/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 09:09:10.469480  107234 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/kindnet-741183/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1123 09:09:11.111688  107234 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/kindnet-741183/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-531046 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (10.624898727s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-531046 -n newest-cni-531046
E1123 09:09:20.076911  107234 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/kindnet-741183/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (10.98s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.24s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-529341 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.24s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-kvdxq" [a2c64126-6d33-4b13-b583-f9b044a3f500] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003855768s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (6.08s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-kvdxq" [a2c64126-6d33-4b13-b583-f9b044a3f500] Running
E1123 09:09:19.497619  107234 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21969-103686/.minikube/profiles/auto-741183/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004829684s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-602386 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (6.08s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.26s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-531046 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.26s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.27s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-602386 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.27s)

Test skip (27/328)

TestDownloadOnly/v1.28.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

TestDownloadOnly/v1.28.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

TestDownloadOnly/v1.28.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

TestDownloadOnly/v1.34.1/cached-images (0s)

=== RUN   TestDownloadOnly/v1.34.1/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.1/cached-images (0.00s)

TestDownloadOnly/v1.34.1/binaries (0s)

=== RUN   TestDownloadOnly/v1.34.1/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.1/binaries (0.00s)

TestDownloadOnly/v1.34.1/kubectl (0s)

=== RUN   TestDownloadOnly/v1.34.1/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.1/kubectl (0.00s)

TestAddons/serial/GCPAuth/RealCredentials (0s)

=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:763: skipping GCPAuth addon test until 'Permission "artifactregistry.repositories.downloadArtifacts" denied on resource "projects/k8s-minikube/locations/us/repositories/test-artifacts" (or it may not exist)' issue is resolved
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:483: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio true linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:37: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:101: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

x
+
TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

x
+
TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

x
+
TestFunctionalNewestKubernetes (0s)

=== RUN   TestFunctionalNewestKubernetes
functional_test.go:82: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

x
+
TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

x
+
TestImageBuild (0s)

=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

x
+
TestISOImage (0s)

=== RUN   TestISOImage
iso_test.go:36: This test requires a VM driver
--- SKIP: TestISOImage (0.00s)

x
+
TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

x
+
TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

x
+
TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

x
+
TestNetworkPlugins/group/kubenet (5.38s)

=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as the crio container runtime requires CNI
panic.go:615: 
----------------------- debugLogs start: kubenet-741183 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-741183

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-741183

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-741183

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-741183

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-741183

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-741183

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-741183

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-741183

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-741183

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-741183

>>> host: /etc/nsswitch.conf:
* Profile "kubenet-741183" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-741183"

>>> host: /etc/hosts:
* Profile "kubenet-741183" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-741183"

>>> host: /etc/resolv.conf:
* Profile "kubenet-741183" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-741183"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-741183

>>> host: crictl pods:
* Profile "kubenet-741183" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-741183"

>>> host: crictl containers:
* Profile "kubenet-741183" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-741183"

>>> k8s: describe netcat deployment:
error: context "kubenet-741183" does not exist

>>> k8s: describe netcat pod(s):
error: context "kubenet-741183" does not exist

>>> k8s: netcat logs:
error: context "kubenet-741183" does not exist

>>> k8s: describe coredns deployment:
error: context "kubenet-741183" does not exist

>>> k8s: describe coredns pods:
error: context "kubenet-741183" does not exist

>>> k8s: coredns logs:
error: context "kubenet-741183" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-741183" does not exist

>>> k8s: api server logs:
error: context "kubenet-741183" does not exist

>>> host: /etc/cni:
* Profile "kubenet-741183" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-741183"

>>> host: ip a s:
* Profile "kubenet-741183" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-741183"

>>> host: ip r s:
* Profile "kubenet-741183" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-741183"

>>> host: iptables-save:
* Profile "kubenet-741183" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-741183"

>>> host: iptables table nat:
* Profile "kubenet-741183" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-741183"

>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-741183" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-741183" does not exist

>>> k8s: kube-proxy logs:
error: context "kubenet-741183" does not exist

>>> host: kubelet daemon status:
* Profile "kubenet-741183" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-741183"

>>> host: kubelet daemon config:
* Profile "kubenet-741183" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-741183"

>>> k8s: kubelet logs:
* Profile "kubenet-741183" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-741183"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-741183" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-741183"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-741183" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-741183"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-741183

>>> host: docker daemon status:
* Profile "kubenet-741183" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-741183"

>>> host: docker daemon config:
* Profile "kubenet-741183" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-741183"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-741183" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-741183"

>>> host: docker system info:
* Profile "kubenet-741183" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-741183"

>>> host: cri-docker daemon status:
* Profile "kubenet-741183" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-741183"

>>> host: cri-docker daemon config:
* Profile "kubenet-741183" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-741183"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-741183" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-741183"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-741183" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-741183"

>>> host: cri-dockerd version:
* Profile "kubenet-741183" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-741183"

>>> host: containerd daemon status:
* Profile "kubenet-741183" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-741183"

>>> host: containerd daemon config:
* Profile "kubenet-741183" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-741183"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-741183" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-741183"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-741183" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-741183"

>>> host: containerd config dump:
* Profile "kubenet-741183" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-741183"

>>> host: crio daemon status:
* Profile "kubenet-741183" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-741183"

>>> host: crio daemon config:
* Profile "kubenet-741183" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-741183"

>>> host: /etc/crio:
* Profile "kubenet-741183" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-741183"

>>> host: crio config:
* Profile "kubenet-741183" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-741183"

----------------------- debugLogs end: kubenet-741183 [took: 5.013623545s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-741183" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-741183
--- SKIP: TestNetworkPlugins/group/kubenet (5.38s)

x
+
TestNetworkPlugins/group/cilium (6.19s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:615: 
----------------------- debugLogs start: cilium-741183 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-741183

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-741183

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-741183

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-741183

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-741183

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-741183

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-741183

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-741183

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-741183

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-741183

>>> host: /etc/nsswitch.conf:
* Profile "cilium-741183" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-741183"

>>> host: /etc/hosts:
* Profile "cilium-741183" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-741183"

>>> host: /etc/resolv.conf:
* Profile "cilium-741183" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-741183"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-741183

>>> host: crictl pods:
* Profile "cilium-741183" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-741183"

>>> host: crictl containers:
* Profile "cilium-741183" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-741183"

>>> k8s: describe netcat deployment:
error: context "cilium-741183" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-741183" does not exist

>>> k8s: netcat logs:
error: context "cilium-741183" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-741183" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-741183" does not exist

>>> k8s: coredns logs:
error: context "cilium-741183" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-741183" does not exist

>>> k8s: api server logs:
error: context "cilium-741183" does not exist

>>> host: /etc/cni:
* Profile "cilium-741183" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-741183"

>>> host: ip a s:
* Profile "cilium-741183" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-741183"

>>> host: ip r s:
* Profile "cilium-741183" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-741183"

>>> host: iptables-save:
* Profile "cilium-741183" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-741183"

>>> host: iptables table nat:
* Profile "cilium-741183" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-741183"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-741183

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-741183

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-741183" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-741183" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-741183

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-741183

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-741183" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-741183" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-741183" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-741183" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-741183" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-741183" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-741183"

>>> host: kubelet daemon config:
* Profile "cilium-741183" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-741183"

>>> k8s: kubelet logs:
* Profile "cilium-741183" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-741183"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-741183" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-741183"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-741183" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-741183"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-741183

>>> host: docker daemon status:
* Profile "cilium-741183" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-741183"

>>> host: docker daemon config:
* Profile "cilium-741183" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-741183"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-741183" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-741183"

>>> host: docker system info:
* Profile "cilium-741183" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-741183"

>>> host: cri-docker daemon status:
* Profile "cilium-741183" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-741183"

>>> host: cri-docker daemon config:
* Profile "cilium-741183" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-741183"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-741183" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-741183"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-741183" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-741183"

>>> host: cri-dockerd version:
* Profile "cilium-741183" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-741183"

>>> host: containerd daemon status:
* Profile "cilium-741183" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-741183"

>>> host: containerd daemon config:
* Profile "cilium-741183" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-741183"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-741183" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-741183"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-741183" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-741183"

>>> host: containerd config dump:
* Profile "cilium-741183" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-741183"

>>> host: crio daemon status:
* Profile "cilium-741183" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-741183"

>>> host: crio daemon config:
* Profile "cilium-741183" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-741183"

>>> host: /etc/crio:
* Profile "cilium-741183" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-741183"

>>> host: crio config:
* Profile "cilium-741183" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-741183"

----------------------- debugLogs end: cilium-741183 [took: 5.993069724s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-741183" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-741183
--- SKIP: TestNetworkPlugins/group/cilium (6.19s)

x
+
TestStartStop/group/disable-driver-mounts (0.17s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-740936" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-740936
--- SKIP: TestStartStop/group/disable-driver-mounts (0.17s)